Jul 15 23:13:38.053704 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 15 23:13:38.053724 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:13:38.053731 kernel: KASLR enabled
Jul 15 23:13:38.053735 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 15 23:13:38.053740 kernel: printk: legacy bootconsole [pl11] enabled
Jul 15 23:13:38.053744 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:13:38.053749 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead5018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 15 23:13:38.053753 kernel: random: crng init done
Jul 15 23:13:38.053756 kernel: secureboot: Secure boot disabled
Jul 15 23:13:38.053760 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:13:38.053764 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 15 23:13:38.053768 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053772 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053776 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 15 23:13:38.053782 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053786 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053790 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053795 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053799 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053804 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053808 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 15 23:13:38.053812 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 23:13:38.053816 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 15 23:13:38.053820 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:13:38.053824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 15 23:13:38.053829 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 15 23:13:38.053833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 15 23:13:38.053837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 15 23:13:38.053841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 15 23:13:38.053846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 15 23:13:38.053850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 15 23:13:38.053854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 15 23:13:38.053858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 15 23:13:38.053863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 15 23:13:38.053867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 15 23:13:38.053871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 15 23:13:38.053875 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 15 23:13:38.053879 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Jul 15 23:13:38.053883 kernel: Zone ranges:
Jul 15 23:13:38.053888 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 15 23:13:38.053895 kernel: DMA32 empty
Jul 15 23:13:38.053899 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 15 23:13:38.053903 kernel: Device empty
Jul 15 23:13:38.053908 kernel: Movable zone start for each node
Jul 15 23:13:38.053912 kernel: Early memory node ranges
Jul 15 23:13:38.053917 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 15 23:13:38.053922 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 15 23:13:38.053926 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 15 23:13:38.053930 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 15 23:13:38.053934 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 15 23:13:38.053939 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 15 23:13:38.053943 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 15 23:13:38.053947 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 15 23:13:38.053952 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 15 23:13:38.053956 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 15 23:13:38.053960 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 15 23:13:38.053965 kernel: cma: Reserved 16 MiB at 0x000000003ec00000 on node -1
Jul 15 23:13:38.053970 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:13:38.053974 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 23:13:38.053979 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:13:38.053983 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 15 23:13:38.053987 kernel: psci: SMC Calling Convention v1.4
Jul 15 23:13:38.053991 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 15 23:13:38.053996 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 15 23:13:38.054000 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:13:38.054004 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:13:38.054009 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 15 23:13:38.054013 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:13:38.054018 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 15 23:13:38.054023 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:13:38.054027 kernel: CPU features: detected: Spectre-v4
Jul 15 23:13:38.054031 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:13:38.054036 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 23:13:38.054040 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 23:13:38.054044 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 15 23:13:38.054049 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 23:13:38.054053 kernel: alternatives: applying boot alternatives
Jul 15 23:13:38.054058 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:13:38.054063 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:13:38.054069 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:13:38.054073 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:13:38.054078 kernel: Fallback order for Node 0: 0
Jul 15 23:13:38.054082 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 15 23:13:38.054086 kernel: Policy zone: Normal
Jul 15 23:13:38.054091 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:13:38.054095 kernel: software IO TLB: area num 2.
Jul 15 23:13:38.054099 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jul 15 23:13:38.054104 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 23:13:38.054108 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:13:38.054113 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:13:38.054119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 23:13:38.054123 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:13:38.054127 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:13:38.054132 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:13:38.054136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 23:13:38.054141 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:13:38.054145 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:13:38.054149 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 23:13:38.054154 kernel: GICv3: 960 SPIs implemented
Jul 15 23:13:38.054158 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 23:13:38.054162 kernel: Root IRQ handler: gic_handle_irq
Jul 15 23:13:38.054167 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 15 23:13:38.054172 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 15 23:13:38.054176 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 15 23:13:38.054181 kernel: ITS: No ITS available, not enabling LPIs
Jul 15 23:13:38.054185 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:13:38.054189 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 15 23:13:38.057894 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 23:13:38.057910 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 15 23:13:38.057915 kernel: Console: colour dummy device 80x25
Jul 15 23:13:38.057920 kernel: printk: legacy console [tty1] enabled
Jul 15 23:13:38.057925 kernel: ACPI: Core revision 20240827
Jul 15 23:13:38.057930 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 15 23:13:38.057939 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:13:38.057944 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:13:38.057949 kernel: landlock: Up and running.
Jul 15 23:13:38.057953 kernel: SELinux: Initializing.
Jul 15 23:13:38.057958 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:13:38.057966 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:13:38.057972 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 15 23:13:38.057977 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 15 23:13:38.057982 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 15 23:13:38.057987 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:13:38.057993 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:13:38.057998 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:13:38.058003 kernel: Remapping and enabling EFI services.
Jul 15 23:13:38.058008 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:13:38.058013 kernel: Detected PIPT I-cache on CPU1
Jul 15 23:13:38.058018 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 15 23:13:38.058023 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 15 23:13:38.058028 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 23:13:38.058033 kernel: SMP: Total of 2 processors activated.
Jul 15 23:13:38.058038 kernel: CPU: All CPU(s) started at EL1
Jul 15 23:13:38.058043 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 23:13:38.058048 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 15 23:13:38.058053 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 23:13:38.058057 kernel: CPU features: detected: Common not Private translations
Jul 15 23:13:38.058062 kernel: CPU features: detected: CRC32 instructions
Jul 15 23:13:38.058068 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 15 23:13:38.058073 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 23:13:38.058078 kernel: CPU features: detected: LSE atomic instructions
Jul 15 23:13:38.058083 kernel: CPU features: detected: Privileged Access Never
Jul 15 23:13:38.058087 kernel: CPU features: detected: Speculation barrier (SB)
Jul 15 23:13:38.058092 kernel: CPU features: detected: TLB range maintenance instructions
Jul 15 23:13:38.058097 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 23:13:38.058102 kernel: CPU features: detected: Scalable Vector Extension
Jul 15 23:13:38.058107 kernel: alternatives: applying system-wide alternatives
Jul 15 23:13:38.058112 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 15 23:13:38.058117 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 15 23:13:38.058122 kernel: SVE: default vector length 16 bytes per vector
Jul 15 23:13:38.058127 kernel: Memory: 3959092K/4194160K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 213880K reserved, 16384K cma-reserved)
Jul 15 23:13:38.058132 kernel: devtmpfs: initialized
Jul 15 23:13:38.058137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:13:38.058142 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 23:13:38.058147 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 23:13:38.058152 kernel: 0 pages in range for non-PLT usage
Jul 15 23:13:38.058157 kernel: 508432 pages in range for PLT usage
Jul 15 23:13:38.058162 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:13:38.058167 kernel: SMBIOS 3.1.0 present.
Jul 15 23:13:38.058172 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 15 23:13:38.058177 kernel: DMI: Memory slots populated: 2/2
Jul 15 23:13:38.058181 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:13:38.058186 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:13:38.058191 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:13:38.058205 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:13:38.058211 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:13:38.058216 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 15 23:13:38.058220 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:13:38.058225 kernel: cpuidle: using governor menu
Jul 15 23:13:38.058230 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:13:38.058235 kernel: ASID allocator initialised with 32768 entries
Jul 15 23:13:38.058240 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:13:38.058245 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:13:38.058249 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:13:38.058255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:13:38.058260 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:13:38.058265 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:13:38.058270 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:13:38.058275 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:13:38.058280 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:13:38.058284 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:13:38.058289 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:13:38.058294 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:13:38.058299 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:13:38.058304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:13:38.058309 kernel: ACPI: Interpreter enabled
Jul 15 23:13:38.058314 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:13:38.058319 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 23:13:38.058324 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 23:13:38.058328 kernel: printk: legacy bootconsole [pl11] disabled
Jul 15 23:13:38.058333 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 15 23:13:38.058338 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:13:38.058344 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:13:38.058349 kernel: iommu: Default domain type: Translated
Jul 15 23:13:38.058353 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:13:38.058358 kernel: efivars: Registered efivars operations
Jul 15 23:13:38.058363 kernel: vgaarb: loaded
Jul 15 23:13:38.058368 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:13:38.058373 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:13:38.058378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:13:38.058382 kernel: pnp: PnP ACPI init
Jul 15 23:13:38.058390 kernel: pnp: PnP ACPI: found 0 devices
Jul 15 23:13:38.058395 kernel: NET: Registered PF_INET protocol family
Jul 15 23:13:38.058400 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:13:38.058405 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:13:38.058410 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:13:38.058415 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:13:38.058420 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:13:38.058425 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:13:38.058429 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:13:38.058435 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:13:38.058440 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:13:38.058445 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:13:38.058450 kernel: kvm [1]: HYP mode not available
Jul 15 23:13:38.058455 kernel: Initialise system trusted keyrings
Jul 15 23:13:38.058459 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:13:38.058464 kernel: Key type asymmetric registered
Jul 15 23:13:38.058469 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:13:38.058473 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:13:38.058479 kernel: io scheduler mq-deadline registered
Jul 15 23:13:38.058484 kernel: io scheduler kyber registered
Jul 15 23:13:38.058489 kernel: io scheduler bfq registered
Jul 15 23:13:38.058493 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:13:38.058498 kernel: thunder_xcv, ver 1.0
Jul 15 23:13:38.058503 kernel: thunder_bgx, ver 1.0
Jul 15 23:13:38.058508 kernel: nicpf, ver 1.0
Jul 15 23:13:38.058512 kernel: nicvf, ver 1.0
Jul 15 23:13:38.058669 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:13:38.058725 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:13:37 UTC (1752621217)
Jul 15 23:13:38.058732 kernel: efifb: probing for efifb
Jul 15 23:13:38.058737 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 15 23:13:38.058742 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 15 23:13:38.058747 kernel: efifb: scrolling: redraw
Jul 15 23:13:38.058751 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 15 23:13:38.058756 kernel: Console: switching to colour frame buffer device 128x48
Jul 15 23:13:38.058761 kernel: fb0: EFI VGA frame buffer device
Jul 15 23:13:38.058767 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 15 23:13:38.058772 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:13:38.058777 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 23:13:38.058782 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:13:38.058786 kernel: watchdog: NMI not fully supported
Jul 15 23:13:38.058791 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:13:38.058796 kernel: Segment Routing with IPv6
Jul 15 23:13:38.058800 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:13:38.058805 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:13:38.058811 kernel: Key type dns_resolver registered
Jul 15 23:13:38.058816 kernel: registered taskstats version 1
Jul 15 23:13:38.058820 kernel: Loading compiled-in X.509 certificates
Jul 15 23:13:38.058825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:13:38.058830 kernel: Demotion targets for Node 0: null
Jul 15 23:13:38.058835 kernel: Key type .fscrypt registered
Jul 15 23:13:38.058839 kernel: Key type fscrypt-provisioning registered
Jul 15 23:13:38.058844 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:13:38.058849 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:13:38.058855 kernel: ima: No architecture policies found
Jul 15 23:13:38.058860 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:13:38.058864 kernel: clk: Disabling unused clocks
Jul 15 23:13:38.058869 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:13:38.058874 kernel: Warning: unable to open an initial console.
Jul 15 23:13:38.058879 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:13:38.058884 kernel: Run /init as init process
Jul 15 23:13:38.058888 kernel: with arguments:
Jul 15 23:13:38.058893 kernel: /init
Jul 15 23:13:38.058899 kernel: with environment:
Jul 15 23:13:38.058903 kernel: HOME=/
Jul 15 23:13:38.058908 kernel: TERM=linux
Jul 15 23:13:38.058913 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:13:38.058919 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:13:38.058926 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:13:38.058932 systemd[1]: Detected virtualization microsoft.
Jul 15 23:13:38.058938 systemd[1]: Detected architecture arm64.
Jul 15 23:13:38.058943 systemd[1]: Running in initrd.
Jul 15 23:13:38.058948 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:13:38.058953 systemd[1]: Hostname set to .
Jul 15 23:13:38.058958 systemd[1]: Initializing machine ID from random generator.
Jul 15 23:13:38.058964 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:13:38.058969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:13:38.058974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:13:38.058980 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:13:38.058986 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:13:38.058992 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:13:38.058998 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:13:38.059004 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:13:38.059009 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:13:38.059014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:13:38.059020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:13:38.059026 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:13:38.059031 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:13:38.059036 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:13:38.059041 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:13:38.059046 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:13:38.059052 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:13:38.059057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:13:38.059062 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:13:38.059068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:13:38.059073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:13:38.059079 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:13:38.059084 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:13:38.059089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:13:38.059094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:13:38.059099 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:13:38.059105 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:13:38.059111 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:13:38.059116 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:13:38.059121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:13:38.059144 systemd-journald[224]: Collecting audit messages is disabled.
Jul 15 23:13:38.059159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:13:38.059166 systemd-journald[224]: Journal started
Jul 15 23:13:38.059180 systemd-journald[224]: Runtime Journal (/run/log/journal/f05627db6fe0405ead396961bcab26f4) is 8M, max 78.5M, 70.5M free.
Jul 15 23:13:38.059538 systemd-modules-load[226]: Inserted module 'overlay'
Jul 15 23:13:38.078926 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:13:38.078950 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:13:38.087426 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jul 15 23:13:38.091873 kernel: Bridge firewalling registered
Jul 15 23:13:38.091742 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:13:38.097531 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:13:38.109820 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:13:38.115795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:13:38.128822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:13:38.136348 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:13:38.159823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:13:38.171362 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:13:38.179122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:13:38.202708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:13:38.207781 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:13:38.217458 systemd-tmpfiles[249]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:13:38.219757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:13:38.227576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:13:38.240790 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:13:38.268118 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:13:38.280170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:13:38.300170 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:13:38.330685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:13:38.347109 systemd-resolved[262]: Positive Trust Anchors:
Jul 15 23:13:38.347125 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:13:38.347146 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:13:38.348925 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jul 15 23:13:38.350642 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:13:38.355885 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:13:38.433223 kernel: SCSI subsystem initialized
Jul 15 23:13:38.439213 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:13:38.447229 kernel: iscsi: registered transport (tcp)
Jul 15 23:13:38.459682 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:13:38.459724 kernel: QLogic iSCSI HBA Driver
Jul 15 23:13:38.476266 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:13:38.498295 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:13:38.505438 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:13:38.555981 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:13:38.561886 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:13:38.623215 kernel: raid6: neonx8 gen() 18550 MB/s
Jul 15 23:13:38.658206 kernel: raid6: neonx4 gen() 18546 MB/s
Jul 15 23:13:38.675209 kernel: raid6: neonx2 gen() 17090 MB/s
Jul 15 23:13:38.681203 kernel: raid6: neonx1 gen() 15020 MB/s
Jul 15 23:13:38.700201 kernel: raid6: int64x8 gen() 10526 MB/s
Jul 15 23:13:38.719200 kernel: raid6: int64x4 gen() 10611 MB/s
Jul 15 23:13:38.739200 kernel: raid6: int64x2 gen() 8982 MB/s
Jul 15 23:13:38.760438 kernel: raid6: int64x1 gen() 7007 MB/s
Jul 15 23:13:38.760490 kernel: raid6: using algorithm neonx8 gen() 18550 MB/s
Jul 15 23:13:38.782814 kernel: raid6: .... xor() 14906 MB/s, rmw enabled
Jul 15 23:13:38.782822 kernel: raid6: using neon recovery algorithm
Jul 15 23:13:38.789201 kernel: xor: measuring software checksum speed
Jul 15 23:13:38.794329 kernel: 8regs : 27244 MB/sec
Jul 15 23:13:38.794335 kernel: 32regs : 28822 MB/sec
Jul 15 23:13:38.797184 kernel: arm64_neon : 37590 MB/sec
Jul 15 23:13:38.800177 kernel: xor: using function: arm64_neon (37590 MB/sec)
Jul 15 23:13:38.839277 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:13:38.844191 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:13:38.853838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:13:38.885330 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jul 15 23:13:38.889882 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:13:38.903083 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:13:38.937748 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Jul 15 23:13:38.958125 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:13:38.968307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:13:39.017533 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:13:39.029668 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 23:13:39.094212 kernel: hv_vmbus: Vmbus version:5.3 Jul 15 23:13:39.108999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:13:39.123915 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 15 23:13:39.123934 kernel: hv_vmbus: registering driver hid_hyperv Jul 15 23:13:39.123941 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 15 23:13:39.123947 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 15 23:13:39.109132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:13:39.158904 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 15 23:13:39.158927 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 15 23:13:39.158935 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 15 23:13:39.159087 kernel: hv_vmbus: registering driver hv_netvsc Jul 15 23:13:39.159094 kernel: PTP clock support registered Jul 15 23:13:39.123794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:13:39.190311 kernel: hv_utils: Registering HyperV Utility Driver Jul 15 23:13:39.190333 kernel: hv_vmbus: registering driver hv_utils Jul 15 23:13:39.190340 kernel: hv_vmbus: registering driver hv_storvsc Jul 15 23:13:39.190347 kernel: hv_utils: Heartbeat IC version 3.0 Jul 15 23:13:39.190353 kernel: hv_utils: Shutdown IC version 3.2 Jul 15 23:13:39.134719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 23:13:39.602412 kernel: hv_utils: TimeSync IC version 4.0 Jul 15 23:13:39.602432 kernel: scsi host1: storvsc_host_t Jul 15 23:13:39.602610 kernel: scsi host0: storvsc_host_t Jul 15 23:13:39.173715 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:13:39.623147 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 15 23:13:39.623205 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 15 23:13:39.181884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:13:39.181979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:13:39.593249 systemd-resolved[262]: Clock change detected. Flushing caches. Jul 15 23:13:39.594652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:13:39.645646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:13:39.666438 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 15 23:13:39.666651 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 15 23:13:39.669614 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 15 23:13:39.674925 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 15 23:13:39.675101 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 15 23:13:39.681452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#305 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:13:39.687933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#312 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:13:39.694442 kernel: hv_netvsc 0022487e-4cd1-0022-487e-4cd10022487e eth0: VF slot 1 added Jul 15 23:13:39.702562 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:13:39.702615 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 15 23:13:39.705602 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 15 23:13:39.705788 kernel: hv_vmbus: registering driver hv_pci Jul 15 23:13:39.711926 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 23:13:39.711967 kernel: hv_pci 506853c6-6d18-4a08-a1a0-a14a2b22d8c8: PCI VMBus probing: Using version 0x10004 Jul 15 23:13:39.717955 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 15 23:13:39.727868 kernel: hv_pci 506853c6-6d18-4a08-a1a0-a14a2b22d8c8: PCI host bridge to bus 6d18:00 Jul 15 23:13:39.728072 kernel: pci_bus 6d18:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 15 23:13:39.728164 kernel: pci_bus 6d18:00: No busn resource found for root bus, will use [bus 00-ff] Jul 15 23:13:39.738136 kernel: pci 6d18:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jul 15 23:13:39.743326 kernel: pci 6d18:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 15 23:13:39.754115 kernel: pci 6d18:00:02.0: enabling Extended Tags Jul 15 23:13:39.754226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 23:13:39.754412 kernel: pci 6d18:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6d18:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jul 15 23:13:39.773406 kernel: pci_bus 6d18:00: busn_res: [bus 00-ff] end is updated to 00 Jul 15 23:13:39.773631 kernel: pci 6d18:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jul 15 23:13:39.787297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 23:13:39.840454 kernel: mlx5_core 6d18:00:02.0: enabling device (0000 -> 0002) Jul 15 23:13:39.847859 kernel: mlx5_core 6d18:00:02.0: PTM is not supported by PCIe Jul 15 23:13:39.847986 kernel: mlx5_core 6d18:00:02.0: firmware version: 16.30.5006 Jul 15 23:13:40.021075 kernel: hv_netvsc 0022487e-4cd1-0022-487e-4cd10022487e eth0: VF registering: eth1 Jul 15 23:13:40.021306 kernel: mlx5_core 6d18:00:02.0 eth1: joined to eth0
Jul 15 23:13:40.026704 kernel: mlx5_core 6d18:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 15 23:13:40.036296 kernel: mlx5_core 6d18:00:02.0 enP27928s1: renamed from eth1 Jul 15 23:13:40.236484 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 15 23:13:40.261976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 15 23:13:40.292783 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 15 23:13:40.316350 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 15 23:13:40.326809 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 15 23:13:40.334301 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:13:40.344972 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:13:40.353516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:13:40.363053 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:13:40.372381 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:13:40.397057 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:13:40.417305 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#85 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:13:40.425452 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:13:40.437283 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:13:40.445292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#102 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:13:40.451293 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:13:41.458296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#299 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 15 23:13:41.469393 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:13:41.470291 disk-uuid[655]: The operation has completed successfully. Jul 15 23:13:41.538030 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:13:41.538139 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:13:41.569011 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:13:41.586501 sh[820]: Success Jul 15 23:13:41.616928 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:13:41.616999 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:13:41.621543 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:13:41.630744 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 23:13:41.778012 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:13:41.786753 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 23:13:41.797391 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 15 23:13:41.822923 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:13:41.822970 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (838) Jul 15 23:13:41.828799 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b Jul 15 23:13:41.833086 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:13:41.836182 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:13:42.039827 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 23:13:42.043783 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:13:42.051494 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 23:13:42.052238 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:13:42.075107 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:13:42.106352 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (861) Jul 15 23:13:42.106416 kernel: BTRFS info (device sda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:13:42.111876 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:13:42.116872 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:13:42.139317 kernel: BTRFS info (device sda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:13:42.142388 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:13:42.150258 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 23:13:42.209607 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 15 23:13:42.221432 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:13:42.255673 systemd-networkd[1007]: lo: Link UP Jul 15 23:13:42.255680 systemd-networkd[1007]: lo: Gained carrier Jul 15 23:13:42.257010 systemd-networkd[1007]: Enumeration completed Jul 15 23:13:42.259108 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:13:42.263591 systemd[1]: Reached target network.target - Network. Jul 15 23:13:42.266155 systemd-networkd[1007]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:13:42.266159 systemd-networkd[1007]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:13:42.338299 kernel: mlx5_core 6d18:00:02.0 enP27928s1: Link up Jul 15 23:13:42.371299 kernel: hv_netvsc 0022487e-4cd1-0022-487e-4cd10022487e eth0: Data path switched to VF: enP27928s1 Jul 15 23:13:42.371722 systemd-networkd[1007]: enP27928s1: Link UP Jul 15 23:13:42.371925 systemd-networkd[1007]: eth0: Link UP Jul 15 23:13:42.372268 systemd-networkd[1007]: eth0: Gained carrier Jul 15 23:13:42.372293 systemd-networkd[1007]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:13:42.392488 systemd-networkd[1007]: enP27928s1: Gained carrier Jul 15 23:13:42.413357 systemd-networkd[1007]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 23:13:42.904659 ignition[944]: Ignition 2.21.0 Jul 15 23:13:42.907334 ignition[944]: Stage: fetch-offline Jul 15 23:13:42.907479 ignition[944]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:13:42.911618 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 15 23:13:42.907487 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:13:42.921902 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 15 23:13:42.907653 ignition[944]: parsed url from cmdline: "" Jul 15 23:13:42.907660 ignition[944]: no config URL provided Jul 15 23:13:42.907667 ignition[944]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:13:42.907684 ignition[944]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:13:42.907688 ignition[944]: failed to fetch config: resource requires networking Jul 15 23:13:42.907998 ignition[944]: Ignition finished successfully Jul 15 23:13:42.963031 ignition[1018]: Ignition 2.21.0 Jul 15 23:13:42.963049 ignition[1018]: Stage: fetch Jul 15 23:13:42.963228 ignition[1018]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:13:42.963236 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:13:42.963337 ignition[1018]: parsed url from cmdline: "" Jul 15 23:13:42.963340 ignition[1018]: no config URL provided Jul 15 23:13:42.963343 ignition[1018]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:13:42.963348 ignition[1018]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:13:42.963375 ignition[1018]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 15 23:13:43.059133 ignition[1018]: GET result: OK Jul 15 23:13:43.059409 ignition[1018]: config has been read from IMDS userdata Jul 15 23:13:43.062345 unknown[1018]: fetched base config from "system" Jul 15 23:13:43.059433 ignition[1018]: parsing config with SHA512: b4bace5886d61a89df31e5074b5536495d24991b31594a6ccca3fdd94f2618400bf46291cb01534cf5b0d3f6cf6c7dc3031ac6250c2edd4eba500bcad913c356 Jul 15 23:13:43.062350 unknown[1018]: fetched base config from "system" Jul 15 23:13:43.062556 ignition[1018]: fetch: fetch complete Jul 15 23:13:43.062354 unknown[1018]: fetched user config from "azure"
Jul 15 23:13:43.062561 ignition[1018]: fetch: fetch passed Jul 15 23:13:43.069792 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 23:13:43.062612 ignition[1018]: Ignition finished successfully Jul 15 23:13:43.075960 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 23:13:43.113448 ignition[1024]: Ignition 2.21.0 Jul 15 23:13:43.116051 ignition[1024]: Stage: kargs Jul 15 23:13:43.116302 ignition[1024]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:13:43.120833 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:13:43.116312 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:13:43.128856 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:13:43.117576 ignition[1024]: kargs: kargs passed Jul 15 23:13:43.117642 ignition[1024]: Ignition finished successfully Jul 15 23:13:43.158640 ignition[1030]: Ignition 2.21.0 Jul 15 23:13:43.158655 ignition[1030]: Stage: disks Jul 15 23:13:43.163058 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:13:43.158955 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:13:43.169659 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:13:43.158968 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:13:43.180122 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:13:43.160157 ignition[1030]: disks: disks passed Jul 15 23:13:43.189056 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:13:43.160214 ignition[1030]: Ignition finished successfully Jul 15 23:13:43.198217 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:13:43.206795 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:13:43.216026 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:13:43.285314 systemd-fsck[1038]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 15 23:13:43.294181 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 23:13:43.306871 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 23:13:43.459305 kernel: EXT4-fs (sda9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none. Jul 15 23:13:43.460322 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:13:43.467096 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:13:43.485549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:13:43.503387 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:13:43.507188 systemd-networkd[1007]: eth0: Gained IPv6LL Jul 15 23:13:43.515491 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 15 23:13:43.532059 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1052) Jul 15 23:13:43.532358 kernel: BTRFS info (device sda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:13:43.537817 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:13:43.557407 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:13:43.557432 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:13:43.537862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:13:43.561403 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:13:43.572234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:13:43.578231 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 23:13:44.017406 systemd-networkd[1007]: enP27928s1: Gained IPv6LL Jul 15 23:13:44.089129 coreos-metadata[1054]: Jul 15 23:13:44.089 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 23:13:44.099134 coreos-metadata[1054]: Jul 15 23:13:44.099 INFO Fetch successful Jul 15 23:13:44.103169 coreos-metadata[1054]: Jul 15 23:13:44.099 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 15 23:13:44.112350 coreos-metadata[1054]: Jul 15 23:13:44.109 INFO Fetch successful Jul 15 23:13:44.112350 coreos-metadata[1054]: Jul 15 23:13:44.109 INFO wrote hostname ci-4372.0.1-n-7d7ad51cdd to /sysroot/etc/hostname Jul 15 23:13:44.112473 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 15 23:13:44.302612 initrd-setup-root[1082]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:13:44.320297 initrd-setup-root[1089]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:13:44.339574 initrd-setup-root[1096]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:13:44.346641 initrd-setup-root[1103]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:13:44.890421 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:13:44.897243 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:13:44.924017 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 23:13:44.940437 kernel: BTRFS info (device sda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:13:44.929987 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:13:44.962441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 15 23:13:44.974161 ignition[1171]: INFO : Ignition 2.21.0 Jul 15 23:13:44.974161 ignition[1171]: INFO : Stage: mount Jul 15 23:13:44.981478 ignition[1171]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:13:44.981478 ignition[1171]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:13:44.981478 ignition[1171]: INFO : mount: mount passed Jul 15 23:13:44.981478 ignition[1171]: INFO : Ignition finished successfully Jul 15 23:13:44.983321 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:13:44.991292 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:13:45.017014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:13:45.049606 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1183) Jul 15 23:13:45.049663 kernel: BTRFS info (device sda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:13:45.053777 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:13:45.056725 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:13:45.059647 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 23:13:45.087840 ignition[1200]: INFO : Ignition 2.21.0 Jul 15 23:13:45.087840 ignition[1200]: INFO : Stage: files Jul 15 23:13:45.096000 ignition[1200]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:13:45.096000 ignition[1200]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:13:45.096000 ignition[1200]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:13:45.096000 ignition[1200]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:13:45.096000 ignition[1200]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:13:45.119860 ignition[1200]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:13:45.119860 ignition[1200]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:13:45.119860 ignition[1200]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:13:45.104256 unknown[1200]: wrote ssh authorized keys file for user: core Jul 15 23:13:45.146105 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 23:13:45.154025 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 15 23:13:45.314232 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:13:45.778136 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 23:13:45.778136 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:13:45.794355 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jul 15 23:13:45.794355 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:13:45.794355 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:13:45.794355 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:13:45.794355 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:13:45.794355 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:13:45.794355 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:13:45.845587 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:13:45.845587 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:13:45.845587 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:13:45.845587 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:13:45.845587 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 23:13:45.845587 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 15 23:13:46.309091 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 23:13:46.514342 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:13:46.514342 ignition[1200]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 23:13:46.547928 ignition[1200]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:13:46.562524 ignition[1200]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:13:46.562524 ignition[1200]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 23:13:46.562524 ignition[1200]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:13:46.592629 ignition[1200]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:13:46.592629 ignition[1200]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:13:46.592629 ignition[1200]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:13:46.592629 ignition[1200]: INFO : files: files passed Jul 15 23:13:46.592629 ignition[1200]: INFO : Ignition finished successfully Jul 15 23:13:46.571426 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:13:46.581771 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:13:46.621474 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:13:46.630333 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:13:46.640228 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 23:13:46.655886 initrd-setup-root-after-ignition[1229]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:13:46.655886 initrd-setup-root-after-ignition[1229]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:13:46.655251 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:13:46.686197 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:13:46.661127 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:13:46.672769 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:13:46.737234 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:13:46.737353 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:13:46.746384 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:13:46.754734 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:13:46.762633 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:13:46.764435 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 23:13:46.800447 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:13:46.807259 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 23:13:46.832011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:13:46.836758 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 15 23:13:46.846060 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 23:13:46.853937 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:13:46.854062 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:13:46.865548 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:13:46.870069 systemd[1]: Stopped target basic.target - Basic System. Jul 15 23:13:46.878708 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:13:46.887709 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:13:46.896513 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:13:46.905743 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:13:46.914539 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:13:46.922843 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:13:46.931736 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:13:46.939768 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:13:46.948374 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:13:46.955519 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:13:46.955680 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:13:46.967582 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:13:46.975858 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:13:46.985745 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:13:46.990205 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:13:46.996393 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jul 15 23:13:46.996538 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 23:13:47.009412 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:13:47.009573 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:13:47.020065 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:13:47.020195 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 23:13:47.027862 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 15 23:13:47.027969 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 15 23:13:47.038377 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:13:47.099007 ignition[1254]: INFO : Ignition 2.21.0 Jul 15 23:13:47.099007 ignition[1254]: INFO : Stage: umount Jul 15 23:13:47.099007 ignition[1254]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:13:47.099007 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 15 23:13:47.099007 ignition[1254]: INFO : umount: umount passed Jul 15 23:13:47.099007 ignition[1254]: INFO : Ignition finished successfully Jul 15 23:13:47.047465 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 23:13:47.068376 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:13:47.068645 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:13:47.078717 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:13:47.078901 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:13:47.092534 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:13:47.097538 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:13:47.097652 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 15 23:13:47.103596 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:13:47.103689 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 23:13:47.112681 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:13:47.114300 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:13:47.119451 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 23:13:47.119560 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:13:47.127188 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:13:47.127249 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:13:47.135311 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 23:13:47.135349 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 23:13:47.142347 systemd[1]: Stopped target network.target - Network. Jul 15 23:13:47.148831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:13:47.148887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:13:47.157004 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:13:47.164474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:13:47.164531 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:13:47.174016 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:13:47.181984 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:13:47.190632 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:13:47.190682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:13:47.197893 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:13:47.197913 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jul 15 23:13:47.205985 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 23:13:47.206038 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:13:47.213638 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:13:47.213663 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:13:47.222703 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:13:47.222738 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:13:47.231928 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:13:47.239067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:13:47.255160 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 23:13:47.440718 kernel: hv_netvsc 0022487e-4cd1-0022-487e-4cd10022487e eth0: Data path switched from VF: enP27928s1 Jul 15 23:13:47.255294 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:13:47.268841 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:13:47.269198 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:13:47.269337 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:13:47.280830 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:13:47.281512 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:13:47.289430 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:13:47.289494 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:13:47.300266 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:13:47.313609 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 15 23:13:47.313696 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:13:47.323871 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:13:47.323937 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:13:47.337578 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 23:13:47.337628 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 23:13:47.343610 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 23:13:47.343645 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:13:47.356383 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:13:47.364826 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:13:47.364907 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:13:47.395172 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 23:13:47.395369 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:13:47.404716 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 23:13:47.404753 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 23:13:47.412881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 23:13:47.412902 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:13:47.421921 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 23:13:47.421978 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:13:47.440817 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 23:13:47.440879 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jul 15 23:13:47.449343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 23:13:47.449384 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:13:47.460718 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 23:13:47.470390 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 23:13:47.470466 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:13:47.479514 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 23:13:47.479564 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:13:47.690854 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 15 23:13:47.493367 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 23:13:47.493447 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:13:47.503829 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 23:13:47.503895 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:13:47.509162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:13:47.509204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:13:47.524344 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 23:13:47.524400 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 23:13:47.524423 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 23:13:47.524456 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 15 23:13:47.524709 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 23:13:47.524812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 23:13:47.559337 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 23:13:47.559637 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 23:13:47.567192 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 23:13:47.578450 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 23:13:47.612459 systemd[1]: Switching root. Jul 15 23:13:47.776352 systemd-journald[224]: Journal stopped Jul 15 23:13:52.876100 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 23:13:52.876120 kernel: SELinux: policy capability open_perms=1 Jul 15 23:13:52.876127 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 23:13:52.876132 kernel: SELinux: policy capability always_check_network=0 Jul 15 23:13:52.876139 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 23:13:52.876144 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 23:13:52.876151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 23:13:52.876156 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 23:13:52.876161 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 23:13:52.876166 kernel: audit: type=1403 audit(1752621228.773:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 23:13:52.876174 systemd[1]: Successfully loaded SELinux policy in 142.292ms. Jul 15 23:13:52.876182 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.963ms. 
Jul 15 23:13:52.876188 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:13:52.876194 systemd[1]: Detected virtualization microsoft. Jul 15 23:13:52.876201 systemd[1]: Detected architecture arm64. Jul 15 23:13:52.876208 systemd[1]: Detected first boot. Jul 15 23:13:52.876214 systemd[1]: Hostname set to . Jul 15 23:13:52.876220 systemd[1]: Initializing machine ID from random generator. Jul 15 23:13:52.876226 zram_generator::config[1296]: No configuration found. Jul 15 23:13:52.876234 kernel: NET: Registered PF_VSOCK protocol family Jul 15 23:13:52.876239 systemd[1]: Populated /etc with preset unit settings. Jul 15 23:13:52.876246 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 23:13:52.876253 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 23:13:52.876259 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 23:13:52.876265 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 23:13:52.876271 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 23:13:52.876291 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 23:13:52.876297 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 23:13:52.876303 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 23:13:52.876310 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 23:13:52.876316 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jul 15 23:13:52.876322 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 23:13:52.876328 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 23:13:52.876334 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:13:52.876340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:13:52.876346 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 23:13:52.876352 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 23:13:52.876359 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 23:13:52.876366 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:13:52.876372 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 15 23:13:52.876380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:13:52.876386 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:13:52.876392 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 23:13:52.876399 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 23:13:52.876405 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 23:13:52.876412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 23:13:52.876418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:13:52.876424 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:13:52.876430 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:13:52.876436 systemd[1]: Reached target swap.target - Swaps. 
Jul 15 23:13:52.876442 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 23:13:52.876448 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 23:13:52.876456 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 23:13:52.876462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:13:52.876468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:13:52.876474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:13:52.876481 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 23:13:52.876487 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 23:13:52.876494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 23:13:52.876500 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 23:13:52.876506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 23:13:52.876513 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 23:13:52.876519 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 23:13:52.876525 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 23:13:52.876532 systemd[1]: Reached target machines.target - Containers. Jul 15 23:13:52.876538 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 23:13:52.876545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:13:52.876552 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jul 15 23:13:52.876558 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 23:13:52.876564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:13:52.876570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:13:52.876576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:13:52.876582 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 23:13:52.876589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:13:52.876595 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 23:13:52.876602 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 23:13:52.876608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 23:13:52.876615 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 23:13:52.876621 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 23:13:52.876627 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:13:52.876634 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:13:52.876640 kernel: fuse: init (API version 7.41) Jul 15 23:13:52.876646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:13:52.876653 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:13:52.876659 kernel: loop: module loaded Jul 15 23:13:52.876665 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 15 23:13:52.876671 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 23:13:52.876677 kernel: ACPI: bus type drm_connector registered Jul 15 23:13:52.876695 systemd-journald[1400]: Collecting audit messages is disabled. Jul 15 23:13:52.876712 systemd-journald[1400]: Journal started Jul 15 23:13:52.876726 systemd-journald[1400]: Runtime Journal (/run/log/journal/5f1cf39dd84948a79fd1202cddf69cbd) is 8M, max 78.5M, 70.5M free. Jul 15 23:13:52.138878 systemd[1]: Queued start job for default target multi-user.target. Jul 15 23:13:52.146851 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 15 23:13:52.147296 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 23:13:52.147582 systemd[1]: systemd-journald.service: Consumed 2.490s CPU time. Jul 15 23:13:52.887760 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:13:52.894345 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 23:13:52.894425 systemd[1]: Stopped verity-setup.service. Jul 15 23:13:52.906935 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:13:52.907600 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 23:13:52.912231 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 23:13:52.916925 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 23:13:52.920671 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 23:13:52.925549 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 23:13:52.929854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 23:13:52.933914 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 23:13:52.939879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 15 23:13:52.945002 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 23:13:52.945160 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 23:13:52.951725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:13:52.951864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:13:52.957675 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:13:52.957811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:13:52.962523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:13:52.962669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:13:52.967585 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 23:13:52.967716 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 23:13:52.972887 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:13:52.973036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:13:52.977790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:13:52.982761 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:13:52.988358 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 23:13:52.993739 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 23:13:52.999236 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:13:53.012575 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:13:53.018203 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 23:13:53.031939 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 15 23:13:53.036618 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 23:13:53.036735 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:13:53.042100 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 23:13:53.048402 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 23:13:53.052883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:13:53.062313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 23:13:53.068954 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 23:13:53.074105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:13:53.075498 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 23:13:53.079918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:13:53.081107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:13:53.086814 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 23:13:53.094630 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:13:53.101425 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 23:13:53.106741 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 23:13:53.121137 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 15 23:13:53.126034 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 23:13:53.135692 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 23:13:53.136297 kernel: loop0: detected capacity change from 0 to 107312 Jul 15 23:13:53.149149 systemd-journald[1400]: Time spent on flushing to /var/log/journal/5f1cf39dd84948a79fd1202cddf69cbd is 23.705ms for 945 entries. Jul 15 23:13:53.149149 systemd-journald[1400]: System Journal (/var/log/journal/5f1cf39dd84948a79fd1202cddf69cbd) is 8M, max 2.6G, 2.6G free. Jul 15 23:13:53.254463 systemd-journald[1400]: Received client request to flush runtime journal. Jul 15 23:13:53.254180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:13:53.260335 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 23:13:53.277926 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Jul 15 23:13:53.277942 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Jul 15 23:13:53.282257 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:13:53.290509 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 23:13:53.299508 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 23:13:53.301609 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 23:13:53.488302 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 23:13:53.556296 kernel: loop1: detected capacity change from 0 to 138376 Jul 15 23:13:53.607213 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 23:13:53.613569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:13:53.633175 systemd-tmpfiles[1457]: ACLs are not supported, ignoring. 
Jul 15 23:13:53.633193 systemd-tmpfiles[1457]: ACLs are not supported, ignoring. Jul 15 23:13:53.636847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:13:53.856309 kernel: loop2: detected capacity change from 0 to 211168 Jul 15 23:13:53.891309 kernel: loop3: detected capacity change from 0 to 28936 Jul 15 23:13:54.033093 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:13:54.039880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:13:54.067694 systemd-udevd[1463]: Using default interface naming scheme 'v255'. Jul 15 23:13:54.209297 kernel: loop4: detected capacity change from 0 to 107312 Jul 15 23:13:54.216299 kernel: loop5: detected capacity change from 0 to 138376 Jul 15 23:13:54.225292 kernel: loop6: detected capacity change from 0 to 211168 Jul 15 23:13:54.232295 kernel: loop7: detected capacity change from 0 to 28936 Jul 15 23:13:54.234844 (sd-merge)[1465]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 15 23:13:54.235229 (sd-merge)[1465]: Merged extensions into '/usr'. Jul 15 23:13:54.238639 systemd[1]: Reload requested from client PID 1436 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 23:13:54.238653 systemd[1]: Reloading... Jul 15 23:13:54.294378 zram_generator::config[1487]: No configuration found. Jul 15 23:13:54.374483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:13:54.470471 systemd[1]: Reloading finished in 231 ms. Jul 15 23:13:54.481981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:13:54.493139 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jul 15 23:13:54.536053 systemd[1]: Starting ensure-sysext.service... Jul 15 23:13:54.546692 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:13:54.556329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:13:54.570199 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 23:13:54.588473 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 23:13:54.588499 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 23:13:54.588684 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 23:13:54.588816 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 23:13:54.589269 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 23:13:54.589472 systemd-tmpfiles[1579]: ACLs are not supported, ignoring. Jul 15 23:13:54.589505 systemd-tmpfiles[1579]: ACLs are not supported, ignoring. Jul 15 23:13:54.596564 systemd[1]: Reload requested from client PID 1576 ('systemctl') (unit ensure-sysext.service)... Jul 15 23:13:54.596581 systemd[1]: Reloading... Jul 15 23:13:54.612140 systemd-tmpfiles[1579]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:13:54.612149 systemd-tmpfiles[1579]: Skipping /boot Jul 15 23:13:54.644916 systemd-tmpfiles[1579]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:13:54.644933 systemd-tmpfiles[1579]: Skipping /boot Jul 15 23:13:54.671324 zram_generator::config[1608]: No configuration found. 
Jul 15 23:13:54.697321 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#142 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 23:13:54.732333 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 23:13:54.754819 kernel: hv_vmbus: registering driver hv_balloon Jul 15 23:13:54.754923 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 15 23:13:54.754937 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 15 23:13:54.809034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:13:54.836347 kernel: hv_vmbus: registering driver hyperv_fb Jul 15 23:13:54.848610 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 15 23:13:54.848702 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 15 23:13:54.855627 kernel: Console: switching to colour dummy device 80x25 Jul 15 23:13:54.866457 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 23:13:54.898832 systemd[1]: Reloading finished in 301 ms. Jul 15 23:13:54.910317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:13:54.939310 kernel: MACsec IEEE 802.1AE Jul 15 23:13:54.945124 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:13:54.956139 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:13:54.965745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:13:54.969412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:13:54.981535 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:13:54.990069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 15 23:13:54.995786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:13:54.995910 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:13:54.997125 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:13:55.012763 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:13:55.019428 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 23:13:55.027517 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:13:55.034549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:13:55.042673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:13:55.042855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:13:55.049830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:13:55.050345 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:13:55.059939 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:13:55.060464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:13:55.107342 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:13:55.121407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:13:55.139914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 15 23:13:55.160333 systemd[1]: Finished ensure-sysext.service. Jul 15 23:13:55.165191 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jul 15 23:13:55.170196 augenrules[1794]: No rules
Jul 15 23:13:55.170869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:13:55.173430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:13:55.185520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:13:55.193417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:13:55.207474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:13:55.211855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:13:55.217567 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:13:55.224563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:13:55.224640 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:13:55.233263 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:13:55.238770 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:13:55.243906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:13:55.244096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:13:55.252091 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:13:55.252556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:13:55.253077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:13:55.258837 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:13:55.260377 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:13:55.267088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:13:55.267250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:13:55.272552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:13:55.272696 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:13:55.281526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:13:55.281613 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:13:55.284487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:13:55.291536 systemd-resolved[1735]: Positive Trust Anchors:
Jul 15 23:13:55.291837 systemd-resolved[1735]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:13:55.291905 systemd-resolved[1735]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:13:55.306056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:13:55.318409 systemd-resolved[1735]: Using system hostname 'ci-4372.0.1-n-7d7ad51cdd'.
Jul 15 23:13:55.319666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:13:55.324231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:13:55.356416 systemd-networkd[1577]: lo: Link UP
Jul 15 23:13:55.356424 systemd-networkd[1577]: lo: Gained carrier
Jul 15 23:13:55.358109 systemd-networkd[1577]: Enumeration completed
Jul 15 23:13:55.358232 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:13:55.358971 systemd-networkd[1577]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:13:55.359304 systemd-networkd[1577]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:13:55.362911 systemd[1]: Reached target network.target - Network.
Jul 15 23:13:55.367812 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 23:13:55.377117 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 23:13:55.426300 kernel: mlx5_core 6d18:00:02.0 enP27928s1: Link up
Jul 15 23:13:55.452484 kernel: hv_netvsc 0022487e-4cd1-0022-487e-4cd10022487e eth0: Data path switched to VF: enP27928s1
Jul 15 23:13:55.454303 systemd-networkd[1577]: enP27928s1: Link UP
Jul 15 23:13:55.454563 systemd-networkd[1577]: eth0: Link UP
Jul 15 23:13:55.454566 systemd-networkd[1577]: eth0: Gained carrier
Jul 15 23:13:55.454587 systemd-networkd[1577]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:13:55.456371 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 23:13:55.462599 systemd-networkd[1577]: enP27928s1: Gained carrier
Jul 15 23:13:55.468376 systemd-networkd[1577]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 15 23:13:55.515318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:13:55.605612 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:13:55.610904 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:13:57.137434 systemd-networkd[1577]: eth0: Gained IPv6LL
Jul 15 23:13:57.140374 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 23:13:57.146488 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 23:13:57.201374 systemd-networkd[1577]: enP27928s1: Gained IPv6LL
Jul 15 23:13:57.611396 ldconfig[1431]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:13:57.621683 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:13:57.628138 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:13:57.646193 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:13:57.650881 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:13:57.654952 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:13:57.660908 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:13:57.666525 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:13:57.671181 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:13:57.677020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:13:57.682866 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:13:57.682894 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:13:57.687587 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:13:57.693106 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:13:57.699445 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:13:57.705639 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 23:13:57.711362 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 23:13:57.716322 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 23:13:57.731070 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 23:13:57.736083 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 23:13:57.741704 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 23:13:57.746292 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:13:57.750406 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:13:57.754391 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:13:57.754411 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:13:57.756630 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 15 23:13:57.771261 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 23:13:57.777425 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 23:13:57.786508 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 23:13:57.793803 (chronyd)[1833]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 15 23:13:57.795305 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 23:13:57.802972 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 23:13:57.810805 jq[1841]: false
Jul 15 23:13:57.811070 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 23:13:57.815805 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 23:13:57.817445 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 15 23:13:57.821489 chronyd[1845]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 15 23:13:57.824240 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 15 23:13:57.827268 KVP[1843]: KVP starting; pid is:1843
Jul 15 23:13:57.828555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:13:57.834211 KVP[1843]: KVP LIC Version: 3.1
Jul 15 23:13:57.837300 kernel: hv_utils: KVP IC version 4.0
Jul 15 23:13:57.838689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 23:13:57.844191 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 23:13:57.850034 chronyd[1845]: Timezone right/UTC failed leap second check, ignoring
Jul 15 23:13:57.850197 chronyd[1845]: Loaded seccomp filter (level 2)
Jul 15 23:13:57.857010 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 23:13:57.866665 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 23:13:57.873406 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 23:13:57.883359 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 23:13:57.888333 extend-filesystems[1842]: Found /dev/sda6
Jul 15 23:13:57.893106 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 23:13:57.894326 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 23:13:57.894958 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 23:13:57.905801 extend-filesystems[1842]: Found /dev/sda9
Jul 15 23:13:57.911264 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 23:13:57.916653 extend-filesystems[1842]: Checking size of /dev/sda9
Jul 15 23:13:57.921437 systemd[1]: Started chronyd.service - NTP client/server.
Jul 15 23:13:57.935154 jq[1871]: true
Jul 15 23:13:57.931701 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 23:13:57.944665 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 23:13:57.944856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 23:13:57.947446 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 23:13:57.948377 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 23:13:57.956776 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 23:13:57.970523 extend-filesystems[1842]: Old size kept for /dev/sda9
Jul 15 23:13:57.989164 update_engine[1868]: I20250715 23:13:57.985589 1868 main.cc:92] Flatcar Update Engine starting
Jul 15 23:13:57.976132 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 23:13:57.977611 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 23:13:57.996133 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 23:13:57.998322 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 23:13:58.026724 (ntainerd)[1886]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 23:13:58.028523 jq[1885]: true
Jul 15 23:13:58.047482 tar[1881]: linux-arm64/LICENSE
Jul 15 23:13:58.049757 tar[1881]: linux-arm64/helm
Jul 15 23:13:58.061727 systemd-logind[1862]: New seat seat0.
Jul 15 23:13:58.064404 systemd-logind[1862]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 15 23:13:58.064598 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 23:13:58.138049 dbus-daemon[1836]: [system] SELinux support is enabled
Jul 15 23:13:58.138250 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 23:13:58.139923 bash[1929]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:13:58.148591 update_engine[1868]: I20250715 23:13:58.147337 1868 update_check_scheduler.cc:74] Next update check in 6m51s
Jul 15 23:13:58.148034 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 23:13:58.157632 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 15 23:13:58.158798 dbus-daemon[1836]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 15 23:13:58.157967 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 23:13:58.158001 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 23:13:58.167663 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 23:13:58.167689 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 23:13:58.183417 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 23:13:58.202601 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 23:13:58.222587 coreos-metadata[1835]: Jul 15 23:13:58.221 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 15 23:13:58.230604 coreos-metadata[1835]: Jul 15 23:13:58.228 INFO Fetch successful
Jul 15 23:13:58.230604 coreos-metadata[1835]: Jul 15 23:13:58.228 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 15 23:13:58.233371 coreos-metadata[1835]: Jul 15 23:13:58.233 INFO Fetch successful
Jul 15 23:13:58.233858 coreos-metadata[1835]: Jul 15 23:13:58.233 INFO Fetching http://168.63.129.16/machine/1a20478c-af53-4514-b60d-5fde41de55ae/ad184607%2D0245%2D4fb4%2Dbe78%2D4d32e51e062a.%5Fci%2D4372.0.1%2Dn%2D7d7ad51cdd?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 15 23:13:58.236272 coreos-metadata[1835]: Jul 15 23:13:58.236 INFO Fetch successful
Jul 15 23:13:58.236479 coreos-metadata[1835]: Jul 15 23:13:58.236 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 15 23:13:58.249185 coreos-metadata[1835]: Jul 15 23:13:58.247 INFO Fetch successful
Jul 15 23:13:58.297626 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 23:13:58.302871 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:13:58.527324 locksmithd[1966]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 23:13:58.536446 containerd[1886]: time="2025-07-15T23:13:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 23:13:58.543741 containerd[1886]: time="2025-07-15T23:13:58.543690164Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 15 23:13:58.559442 containerd[1886]: time="2025-07-15T23:13:58.559390972Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.376µs"
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.559566244Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.559593572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.559768644Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.559781996Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.559801740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.559846004Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.559856228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.560093860Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.560105556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.560117412Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.560122284Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560313 containerd[1886]: time="2025-07-15T23:13:58.560185260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560871 containerd[1886]: time="2025-07-15T23:13:58.560786140Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560871 containerd[1886]: time="2025-07-15T23:13:58.560827420Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:13:58.560871 containerd[1886]: time="2025-07-15T23:13:58.560834820Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 23:13:58.561678 containerd[1886]: time="2025-07-15T23:13:58.561020940Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 23:13:58.561678 containerd[1886]: time="2025-07-15T23:13:58.561531636Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 23:13:58.561678 containerd[1886]: time="2025-07-15T23:13:58.561647500Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 23:13:58.578680 sshd_keygen[1867]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579164428Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579248164Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579259964Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579268924Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579329724Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579337956Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579348604Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 23:13:58.579502 containerd[1886]: time="2025-07-15T23:13:58.579357036Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 23:13:58.582860 containerd[1886]: time="2025-07-15T23:13:58.582047956Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 23:13:58.582993 containerd[1886]: time="2025-07-15T23:13:58.582970276Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 23:13:58.583653 containerd[1886]: time="2025-07-15T23:13:58.583096812Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 23:13:58.583653 containerd[1886]: time="2025-07-15T23:13:58.583120620Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 23:13:58.583790 containerd[1886]: time="2025-07-15T23:13:58.583765884Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 23:13:58.583865 containerd[1886]: time="2025-07-15T23:13:58.583850828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.583951828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.583970820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.583980036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.583989516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.583998884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.584005932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.584014660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.584021332Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.584031828Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.584100260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 23:13:58.584444 containerd[1886]: time="2025-07-15T23:13:58.584112860Z" level=info msg="Start snapshots syncer"
Jul 15 23:13:58.584678 containerd[1886]: time="2025-07-15T23:13:58.584656100Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 23:13:58.585387 containerd[1886]: time="2025-07-15T23:13:58.585022820Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 23:13:58.585387 containerd[1886]: time="2025-07-15T23:13:58.585080980Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586417148Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586597396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586618892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586626444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586635340Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586645164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586652132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586659292Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 23:13:58.586705 containerd[1886]: time="2025-07-15T23:13:58.586687996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 23:13:58.586900 containerd[1886]: time="2025-07-15T23:13:58.586695172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 23:13:58.586900 containerd[1886]: time="2025-07-15T23:13:58.586736004Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 23:13:58.586900 containerd[1886]: time="2025-07-15T23:13:58.586837652Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:13:58.586900 containerd[1886]: time="2025-07-15T23:13:58.586857884Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:13:58.586900 containerd[1886]: time="2025-07-15T23:13:58.586865876Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:13:58.586900 containerd[1886]: time="2025-07-15T23:13:58.586871468Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:13:58.586900 containerd[1886]: time="2025-07-15T23:13:58.586876372Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 23:13:58.586992 containerd[1886]: time="2025-07-15T23:13:58.586929876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 23:13:58.586992 containerd[1886]: time="2025-07-15T23:13:58.586938724Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 23:13:58.586992 containerd[1886]: time="2025-07-15T23:13:58.586952556Z" level=info msg="runtime interface created"
Jul 15 23:13:58.586992 containerd[1886]: time="2025-07-15T23:13:58.586955748Z" level=info msg="created NRI interface"
Jul 15 23:13:58.586992 containerd[1886]: time="2025-07-15T23:13:58.586961180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 23:13:58.586992 containerd[1886]: time="2025-07-15T23:13:58.586971972Z" level=info msg="Connect containerd service"
Jul 15 23:13:58.587063 containerd[1886]: time="2025-07-15T23:13:58.587003556Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 23:13:58.589134 containerd[1886]: time="2025-07-15T23:13:58.589048572Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:13:58.605995 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 23:13:58.618547 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 23:13:58.628972 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 15 23:13:58.650806 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 23:13:58.651699 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 23:13:58.663651 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 23:13:58.682378 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 15 23:13:58.699851 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 23:13:58.709616 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 23:13:58.719613 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 15 23:13:58.726024 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 23:13:58.849879 tar[1881]: linux-arm64/README.md
Jul 15 23:13:58.863018 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 23:13:58.895268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:13:59.050354 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:13:59.223104 containerd[1886]: time="2025-07-15T23:13:59.222718612Z" level=info msg="Start subscribing containerd event" Jul 15 23:13:59.223389 containerd[1886]: time="2025-07-15T23:13:59.223305612Z" level=info msg="Start recovering state" Jul 15 23:13:59.223883 containerd[1886]: time="2025-07-15T23:13:59.223740164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:13:59.223883 containerd[1886]: time="2025-07-15T23:13:59.223842052Z" level=info msg="Start event monitor" Jul 15 23:13:59.224590 containerd[1886]: time="2025-07-15T23:13:59.223865876Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:13:59.224590 containerd[1886]: time="2025-07-15T23:13:59.224455868Z" level=info msg="Start streaming server" Jul 15 23:13:59.224590 containerd[1886]: time="2025-07-15T23:13:59.224469708Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:13:59.224590 containerd[1886]: time="2025-07-15T23:13:59.224475388Z" level=info msg="runtime interface starting up..." Jul 15 23:13:59.224590 containerd[1886]: time="2025-07-15T23:13:59.224479476Z" level=info msg="starting plugins..." Jul 15 23:13:59.225044 containerd[1886]: time="2025-07-15T23:13:59.224366620Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:13:59.225044 containerd[1886]: time="2025-07-15T23:13:59.224911828Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:13:59.225880 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 23:13:59.226660 containerd[1886]: time="2025-07-15T23:13:59.225753428Z" level=info msg="containerd successfully booted in 0.689693s" Jul 15 23:13:59.234709 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 15 23:13:59.241033 systemd[1]: Startup finished in 1.704s (kernel) + 10.621s (initrd) + 10.609s (userspace) = 22.935s. Jul 15 23:13:59.365522 kubelet[2040]: E0715 23:13:59.365460 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:13:59.367597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:13:59.367713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:13:59.367996 systemd[1]: kubelet.service: Consumed 573ms CPU time, 255.8M memory peak. Jul 15 23:13:59.419081 login[2027]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 15 23:13:59.420236 login[2028]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:59.425470 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:13:59.426508 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:13:59.433352 systemd-logind[1862]: New session 2 of user core. Jul 15 23:13:59.460295 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:13:59.462944 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 23:13:59.475073 (systemd)[2060]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:13:59.477416 systemd-logind[1862]: New session c1 of user core. Jul 15 23:13:59.637855 systemd[2060]: Queued start job for default target default.target. Jul 15 23:13:59.645642 systemd[2060]: Created slice app.slice - User Application Slice. Jul 15 23:13:59.645821 systemd[2060]: Reached target paths.target - Paths. 
Jul 15 23:13:59.645905 systemd[2060]: Reached target timers.target - Timers. Jul 15 23:13:59.647379 systemd[2060]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:13:59.658518 systemd[2060]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:13:59.658684 systemd[2060]: Reached target sockets.target - Sockets. Jul 15 23:13:59.658871 systemd[2060]: Reached target basic.target - Basic System. Jul 15 23:13:59.659015 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:13:59.659966 systemd[2060]: Reached target default.target - Main User Target. Jul 15 23:13:59.660095 systemd[2060]: Startup finished in 177ms. Jul 15 23:13:59.661551 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:14:00.083585 waagent[2025]: 2025-07-15T23:14:00.083509Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 15 23:14:00.088549 waagent[2025]: 2025-07-15T23:14:00.088490Z INFO Daemon Daemon OS: flatcar 4372.0.1 Jul 15 23:14:00.092016 waagent[2025]: 2025-07-15T23:14:00.091980Z INFO Daemon Daemon Python: 3.11.12 Jul 15 23:14:00.095556 waagent[2025]: 2025-07-15T23:14:00.095507Z INFO Daemon Daemon Run daemon Jul 15 23:14:00.098808 waagent[2025]: 2025-07-15T23:14:00.098763Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.1' Jul 15 23:14:00.105671 waagent[2025]: 2025-07-15T23:14:00.105599Z INFO Daemon Daemon Using waagent for provisioning Jul 15 23:14:00.109800 waagent[2025]: 2025-07-15T23:14:00.109764Z INFO Daemon Daemon Activate resource disk Jul 15 23:14:00.113393 waagent[2025]: 2025-07-15T23:14:00.113362Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 15 23:14:00.122058 waagent[2025]: 2025-07-15T23:14:00.122008Z INFO Daemon Daemon Found device: None Jul 15 23:14:00.125708 waagent[2025]: 2025-07-15T23:14:00.125670Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect 
disk topology Jul 15 23:14:00.132265 waagent[2025]: 2025-07-15T23:14:00.132233Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 15 23:14:00.141228 waagent[2025]: 2025-07-15T23:14:00.141184Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 23:14:00.145810 waagent[2025]: 2025-07-15T23:14:00.145774Z INFO Daemon Daemon Running default provisioning handler Jul 15 23:14:00.155514 waagent[2025]: 2025-07-15T23:14:00.155453Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 15 23:14:00.165995 waagent[2025]: 2025-07-15T23:14:00.165942Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 15 23:14:00.173468 waagent[2025]: 2025-07-15T23:14:00.173429Z INFO Daemon Daemon cloud-init is enabled: False Jul 15 23:14:00.177111 waagent[2025]: 2025-07-15T23:14:00.177086Z INFO Daemon Daemon Copying ovf-env.xml Jul 15 23:14:00.271118 waagent[2025]: 2025-07-15T23:14:00.270663Z INFO Daemon Daemon Successfully mounted dvd Jul 15 23:14:00.283197 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 15 23:14:00.285304 waagent[2025]: 2025-07-15T23:14:00.285121Z INFO Daemon Daemon Detect protocol endpoint Jul 15 23:14:00.289477 waagent[2025]: 2025-07-15T23:14:00.289430Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 23:14:00.294378 waagent[2025]: 2025-07-15T23:14:00.294264Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 15 23:14:00.299396 waagent[2025]: 2025-07-15T23:14:00.299365Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 15 23:14:00.303737 waagent[2025]: 2025-07-15T23:14:00.303702Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 15 23:14:00.308052 waagent[2025]: 2025-07-15T23:14:00.308021Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 15 23:14:00.346298 waagent[2025]: 2025-07-15T23:14:00.346206Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 15 23:14:00.351772 waagent[2025]: 2025-07-15T23:14:00.351750Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 15 23:14:00.355697 waagent[2025]: 2025-07-15T23:14:00.355671Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 15 23:14:00.419859 login[2027]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:00.424673 systemd-logind[1862]: New session 1 of user core. Jul 15 23:14:00.429441 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:14:00.460174 waagent[2025]: 2025-07-15T23:14:00.460078Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 15 23:14:00.465118 waagent[2025]: 2025-07-15T23:14:00.465072Z INFO Daemon Daemon Forcing an update of the goal state. Jul 15 23:14:00.473140 waagent[2025]: 2025-07-15T23:14:00.473098Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 23:14:00.490777 waagent[2025]: 2025-07-15T23:14:00.490737Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 15 23:14:00.496036 waagent[2025]: 2025-07-15T23:14:00.496000Z INFO Daemon Jul 15 23:14:00.498372 waagent[2025]: 2025-07-15T23:14:00.498344Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 68a5577b-3265-44c6-be3a-8c0c1ff6a50d eTag: 12997558744886751448 source: Fabric] Jul 15 23:14:00.507480 waagent[2025]: 2025-07-15T23:14:00.507447Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 15 23:14:00.512533 waagent[2025]: 2025-07-15T23:14:00.512503Z INFO Daemon Jul 15 23:14:00.514690 waagent[2025]: 2025-07-15T23:14:00.514665Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 15 23:14:00.523723 waagent[2025]: 2025-07-15T23:14:00.523694Z INFO Daemon Daemon Downloading artifacts profile blob Jul 15 23:14:00.591080 waagent[2025]: 2025-07-15T23:14:00.591000Z INFO Daemon Downloaded certificate {'thumbprint': 'BD2F9C170AC16018008A782FD26C530FC5E221F4', 'hasPrivateKey': False} Jul 15 23:14:00.598908 waagent[2025]: 2025-07-15T23:14:00.598837Z INFO Daemon Downloaded certificate {'thumbprint': '481B52AD06A9323EB1A544491C9A13DDD03FE6A5', 'hasPrivateKey': True} Jul 15 23:14:00.606838 waagent[2025]: 2025-07-15T23:14:00.606797Z INFO Daemon Fetch goal state completed Jul 15 23:14:00.618328 waagent[2025]: 2025-07-15T23:14:00.618264Z INFO Daemon Daemon Starting provisioning Jul 15 23:14:00.622497 waagent[2025]: 2025-07-15T23:14:00.622456Z INFO Daemon Daemon Handle ovf-env.xml. Jul 15 23:14:00.626527 waagent[2025]: 2025-07-15T23:14:00.626503Z INFO Daemon Daemon Set hostname [ci-4372.0.1-n-7d7ad51cdd] Jul 15 23:14:00.645522 waagent[2025]: 2025-07-15T23:14:00.645459Z INFO Daemon Daemon Publish hostname [ci-4372.0.1-n-7d7ad51cdd] Jul 15 23:14:00.650554 waagent[2025]: 2025-07-15T23:14:00.650510Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 15 23:14:00.655475 waagent[2025]: 2025-07-15T23:14:00.655438Z INFO Daemon Daemon Primary interface is [eth0] Jul 15 23:14:00.665604 systemd-networkd[1577]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:14:00.665609 systemd-networkd[1577]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 23:14:00.665642 systemd-networkd[1577]: eth0: DHCP lease lost Jul 15 23:14:00.666642 waagent[2025]: 2025-07-15T23:14:00.666582Z INFO Daemon Daemon Create user account if not exists Jul 15 23:14:00.671135 waagent[2025]: 2025-07-15T23:14:00.671091Z INFO Daemon Daemon User core already exists, skip useradd Jul 15 23:14:00.675263 waagent[2025]: 2025-07-15T23:14:00.675223Z INFO Daemon Daemon Configure sudoer Jul 15 23:14:00.684960 waagent[2025]: 2025-07-15T23:14:00.684882Z INFO Daemon Daemon Configure sshd Jul 15 23:14:00.691243 waagent[2025]: 2025-07-15T23:14:00.691168Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 15 23:14:00.692335 systemd-networkd[1577]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 23:14:00.702448 waagent[2025]: 2025-07-15T23:14:00.702382Z INFO Daemon Daemon Deploy ssh public key. Jul 15 23:14:01.804332 waagent[2025]: 2025-07-15T23:14:01.804253Z INFO Daemon Daemon Provisioning complete Jul 15 23:14:01.818956 waagent[2025]: 2025-07-15T23:14:01.818915Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 15 23:14:01.823962 waagent[2025]: 2025-07-15T23:14:01.823919Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 15 23:14:01.831410 waagent[2025]: 2025-07-15T23:14:01.831376Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 15 23:14:01.932222 waagent[2114]: 2025-07-15T23:14:01.932143Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 15 23:14:01.933059 waagent[2114]: 2025-07-15T23:14:01.932675Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.1 Jul 15 23:14:01.933059 waagent[2114]: 2025-07-15T23:14:01.932734Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 15 23:14:01.933059 waagent[2114]: 2025-07-15T23:14:01.932775Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 15 23:14:01.952319 waagent[2114]: 2025-07-15T23:14:01.952228Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 15 23:14:01.952496 waagent[2114]: 2025-07-15T23:14:01.952468Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 23:14:01.952537 waagent[2114]: 2025-07-15T23:14:01.952519Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 23:14:01.959499 waagent[2114]: 2025-07-15T23:14:01.959446Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 23:14:01.965080 waagent[2114]: 2025-07-15T23:14:01.965047Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 15 23:14:01.965524 waagent[2114]: 2025-07-15T23:14:01.965492Z INFO ExtHandler Jul 15 23:14:01.965577 waagent[2114]: 2025-07-15T23:14:01.965560Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e82ce079-119b-4ae7-a0b8-4e572206ba07 eTag: 12997558744886751448 source: Fabric] Jul 15 23:14:01.965796 waagent[2114]: 2025-07-15T23:14:01.965771Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 15 23:14:01.966181 waagent[2114]: 2025-07-15T23:14:01.966152Z INFO ExtHandler Jul 15 23:14:01.966217 waagent[2114]: 2025-07-15T23:14:01.966202Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 15 23:14:01.969684 waagent[2114]: 2025-07-15T23:14:01.969657Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 15 23:14:02.029220 waagent[2114]: 2025-07-15T23:14:02.029133Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BD2F9C170AC16018008A782FD26C530FC5E221F4', 'hasPrivateKey': False} Jul 15 23:14:02.029595 waagent[2114]: 2025-07-15T23:14:02.029563Z INFO ExtHandler Downloaded certificate {'thumbprint': '481B52AD06A9323EB1A544491C9A13DDD03FE6A5', 'hasPrivateKey': True} Jul 15 23:14:02.029909 waagent[2114]: 2025-07-15T23:14:02.029879Z INFO ExtHandler Fetch goal state completed Jul 15 23:14:02.043267 waagent[2114]: 2025-07-15T23:14:02.043201Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 15 23:14:02.047184 waagent[2114]: 2025-07-15T23:14:02.047123Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2114 Jul 15 23:14:02.047334 waagent[2114]: 2025-07-15T23:14:02.047310Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 15 23:14:02.047620 waagent[2114]: 2025-07-15T23:14:02.047593Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 15 23:14:02.048734 waagent[2114]: 2025-07-15T23:14:02.048696Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 15 23:14:02.049068 waagent[2114]: 2025-07-15T23:14:02.049036Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 15 23:14:02.049193 waagent[2114]: 
2025-07-15T23:14:02.049170Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 15 23:14:02.049664 waagent[2114]: 2025-07-15T23:14:02.049633Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 15 23:14:02.120052 waagent[2114]: 2025-07-15T23:14:02.119951Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 15 23:14:02.120178 waagent[2114]: 2025-07-15T23:14:02.120152Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 15 23:14:02.125293 waagent[2114]: 2025-07-15T23:14:02.124903Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 15 23:14:02.130405 systemd[1]: Reload requested from client PID 2131 ('systemctl') (unit waagent.service)... Jul 15 23:14:02.130640 systemd[1]: Reloading... Jul 15 23:14:02.198306 zram_generator::config[2171]: No configuration found. Jul 15 23:14:02.278317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:14:02.362329 systemd[1]: Reloading finished in 231 ms. Jul 15 23:14:02.381182 waagent[2114]: 2025-07-15T23:14:02.378649Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 15 23:14:02.381182 waagent[2114]: 2025-07-15T23:14:02.378868Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 15 23:14:02.612547 waagent[2114]: 2025-07-15T23:14:02.612465Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 15 23:14:02.612826 waagent[2114]: 2025-07-15T23:14:02.612793Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 15 23:14:02.613490 waagent[2114]: 2025-07-15T23:14:02.613449Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 15 23:14:02.613796 waagent[2114]: 2025-07-15T23:14:02.613757Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 15 23:14:02.614078 waagent[2114]: 2025-07-15T23:14:02.614041Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 23:14:02.614246 waagent[2114]: 2025-07-15T23:14:02.614209Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 15 23:14:02.614388 waagent[2114]: 2025-07-15T23:14:02.614329Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 15 23:14:02.614681 waagent[2114]: 2025-07-15T23:14:02.614623Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 23:14:02.614896 waagent[2114]: 2025-07-15T23:14:02.614848Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 15 23:14:02.615204 waagent[2114]: 2025-07-15T23:14:02.615167Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 15 23:14:02.615434 waagent[2114]: 2025-07-15T23:14:02.615372Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 15 23:14:02.615702 waagent[2114]: 2025-07-15T23:14:02.615666Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 23:14:02.615856 waagent[2114]: 2025-07-15T23:14:02.615811Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 15 23:14:02.616650 waagent[2114]: 2025-07-15T23:14:02.616396Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 15 23:14:02.616650 waagent[2114]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 15 23:14:02.616650 waagent[2114]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 15 23:14:02.616650 waagent[2114]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 15 23:14:02.616650 waagent[2114]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 15 23:14:02.616650 waagent[2114]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 23:14:02.616650 waagent[2114]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 23:14:02.617325 waagent[2114]: 2025-07-15T23:14:02.617269Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 23:14:02.617494 waagent[2114]: 2025-07-15T23:14:02.617439Z INFO EnvHandler ExtHandler Configure routes Jul 15 23:14:02.617528 waagent[2114]: 2025-07-15T23:14:02.617507Z INFO EnvHandler ExtHandler Gateway:None Jul 15 23:14:02.617543 waagent[2114]: 2025-07-15T23:14:02.617534Z INFO EnvHandler ExtHandler Routes:None Jul 15 23:14:02.622076 waagent[2114]: 2025-07-15T23:14:02.622009Z INFO ExtHandler ExtHandler Jul 15 23:14:02.622484 waagent[2114]: 2025-07-15T23:14:02.622428Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e298ba2a-7993-4e5e-bd40-0330f57632c2 correlation 2c58b25c-900b-4320-a7cc-5678acdc8592 created: 2025-07-15T23:12:59.768745Z] Jul 15 23:14:02.623421 waagent[2114]: 2025-07-15T23:14:02.623375Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 15 23:14:02.623965 waagent[2114]: 2025-07-15T23:14:02.623927Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 15 23:14:02.646802 waagent[2114]: 2025-07-15T23:14:02.646698Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 15 23:14:02.646802 waagent[2114]: Try `iptables -h' or 'iptables --help' for more information.) Jul 15 23:14:02.647348 waagent[2114]: 2025-07-15T23:14:02.647308Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3A898E7E-0FB5-42FC-838B-61DF41A52E7A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 15 23:14:02.658012 waagent[2114]: 2025-07-15T23:14:02.657944Z INFO MonitorHandler ExtHandler Network interfaces: Jul 15 23:14:02.658012 waagent[2114]: Executing ['ip', '-a', '-o', 'link']: Jul 15 23:14:02.658012 waagent[2114]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 15 23:14:02.658012 waagent[2114]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:4c:d1 brd ff:ff:ff:ff:ff:ff Jul 15 23:14:02.658012 waagent[2114]: 3: enP27928s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:4c:d1 brd ff:ff:ff:ff:ff:ff\ altname enP27928p0s2 Jul 15 23:14:02.658012 waagent[2114]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 15 23:14:02.658012 waagent[2114]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 15 23:14:02.658012 waagent[2114]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 15 23:14:02.658012 waagent[2114]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Jul 15 23:14:02.658012 waagent[2114]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 15 23:14:02.658012 waagent[2114]: 2: eth0 inet6 fe80::222:48ff:fe7e:4cd1/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 23:14:02.658012 waagent[2114]: 3: enP27928s1 inet6 fe80::222:48ff:fe7e:4cd1/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 23:14:02.753311 waagent[2114]: 2025-07-15T23:14:02.752913Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 15 23:14:02.753311 waagent[2114]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:14:02.753311 waagent[2114]: pkts bytes target prot opt in out source destination Jul 15 23:14:02.753311 waagent[2114]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:14:02.753311 waagent[2114]: pkts bytes target prot opt in out source destination Jul 15 23:14:02.753311 waagent[2114]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:14:02.753311 waagent[2114]: pkts bytes target prot opt in out source destination Jul 15 23:14:02.753311 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 23:14:02.753311 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 23:14:02.753311 waagent[2114]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 23:14:02.755440 waagent[2114]: 2025-07-15T23:14:02.755392Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 15 23:14:02.755440 waagent[2114]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:14:02.755440 waagent[2114]: pkts bytes target prot opt in out source destination Jul 15 23:14:02.755440 waagent[2114]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:14:02.755440 waagent[2114]: pkts bytes target prot opt in out source destination Jul 15 23:14:02.755440 waagent[2114]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 23:14:02.755440 
waagent[2114]: pkts bytes target prot opt in out source destination Jul 15 23:14:02.755440 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 23:14:02.755440 waagent[2114]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 23:14:02.755440 waagent[2114]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 23:14:02.755643 waagent[2114]: 2025-07-15T23:14:02.755617Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 15 23:14:09.475720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:14:09.477127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:14:09.575895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:14:09.582849 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:14:09.715067 kubelet[2264]: E0715 23:14:09.715013 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:14:09.718037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:14:09.718152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:14:09.718619 systemd[1]: kubelet.service: Consumed 113ms CPU time, 107.2M memory peak. Jul 15 23:14:16.636614 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:14:16.637824 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:54196.service - OpenSSH per-connection server daemon (10.200.16.10:54196). 
Jul 15 23:14:17.181653 sshd[2271]: Accepted publickey for core from 10.200.16.10 port 54196 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:17.182775 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:17.186661 systemd-logind[1862]: New session 3 of user core. Jul 15 23:14:17.197650 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:14:17.597363 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:54208.service - OpenSSH per-connection server daemon (10.200.16.10:54208). Jul 15 23:14:18.093458 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 54208 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:18.094656 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:18.098593 systemd-logind[1862]: New session 4 of user core. Jul 15 23:14:18.108463 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:14:18.445796 sshd[2278]: Connection closed by 10.200.16.10 port 54208 Jul 15 23:14:18.446347 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:18.450042 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:54208.service: Deactivated successfully. Jul 15 23:14:18.451567 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:14:18.452212 systemd-logind[1862]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:14:18.453687 systemd-logind[1862]: Removed session 4. Jul 15 23:14:18.520156 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:54222.service - OpenSSH per-connection server daemon (10.200.16.10:54222). 
Jul 15 23:14:18.954458 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 54222 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:18.955591 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:18.959566 systemd-logind[1862]: New session 5 of user core. Jul 15 23:14:18.969583 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 23:14:19.276928 sshd[2286]: Connection closed by 10.200.16.10 port 54222 Jul 15 23:14:19.277504 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:19.280732 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:54222.service: Deactivated successfully. Jul 15 23:14:19.282155 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:14:19.282788 systemd-logind[1862]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:14:19.283872 systemd-logind[1862]: Removed session 5. Jul 15 23:14:19.359040 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:54228.service - OpenSSH per-connection server daemon (10.200.16.10:54228). Jul 15 23:14:19.725560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 23:14:19.729454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:14:19.792306 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 54228 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:14:19.793450 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:19.798554 systemd-logind[1862]: New session 6 of user core. Jul 15 23:14:19.807508 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:14:19.836326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 23:14:19.844660 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:14:19.967157 kubelet[2303]: E0715 23:14:19.967099 2303 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:14:19.969553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:14:19.969794 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:14:19.970381 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.7M memory peak.
Jul 15 23:14:20.117401 sshd[2297]: Connection closed by 10.200.16.10 port 54228
Jul 15 23:14:20.118089 sshd-session[2292]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:20.121264 systemd-logind[1862]: Session 6 logged out. Waiting for processes to exit.
Jul 15 23:14:20.121797 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:54228.service: Deactivated successfully.
Jul 15 23:14:20.123255 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 23:14:20.124847 systemd-logind[1862]: Removed session 6.
Jul 15 23:14:20.195298 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:55150.service - OpenSSH per-connection server daemon (10.200.16.10:55150).
Jul 15 23:14:20.628322 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 55150 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:14:20.629508 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:20.633621 systemd-logind[1862]: New session 7 of user core.
Jul 15 23:14:20.639455 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 23:14:20.944941 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 23:14:20.945158 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:14:20.959165 sudo[2317]: pam_unix(sudo:session): session closed for user root
Jul 15 23:14:21.040256 sshd[2316]: Connection closed by 10.200.16.10 port 55150
Jul 15 23:14:21.040966 sshd-session[2314]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:21.044763 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:55150.service: Deactivated successfully.
Jul 15 23:14:21.046524 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 23:14:21.047200 systemd-logind[1862]: Session 7 logged out. Waiting for processes to exit.
Jul 15 23:14:21.048880 systemd-logind[1862]: Removed session 7.
Jul 15 23:14:21.139060 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:55154.service - OpenSSH per-connection server daemon (10.200.16.10:55154).
Jul 15 23:14:21.593990 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 55154 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:14:21.595192 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:21.599095 systemd-logind[1862]: New session 8 of user core.
Jul 15 23:14:21.607558 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 23:14:21.647302 chronyd[1845]: Selected source PHC0
Jul 15 23:14:21.849746 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 23:14:21.850408 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:14:21.857404 sudo[2327]: pam_unix(sudo:session): session closed for user root
Jul 15 23:14:21.861697 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 23:14:21.861924 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:14:21.869793 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:14:21.900848 augenrules[2349]: No rules
Jul 15 23:14:21.902248 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:14:21.902476 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:14:21.905043 sudo[2326]: pam_unix(sudo:session): session closed for user root
Jul 15 23:14:21.991045 sshd[2325]: Connection closed by 10.200.16.10 port 55154
Jul 15 23:14:21.990391 sshd-session[2323]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:21.993886 systemd-logind[1862]: Session 8 logged out. Waiting for processes to exit.
Jul 15 23:14:21.994546 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:55154.service: Deactivated successfully.
Jul 15 23:14:21.996728 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 23:14:21.998520 systemd-logind[1862]: Removed session 8.
Jul 15 23:14:22.076254 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:55164.service - OpenSSH per-connection server daemon (10.200.16.10:55164).
Jul 15 23:14:22.551936 sshd[2358]: Accepted publickey for core from 10.200.16.10 port 55164 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:14:22.553052 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:22.556772 systemd-logind[1862]: New session 9 of user core.
Jul 15 23:14:22.567428 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 23:14:22.816955 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 23:14:22.817609 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:14:23.744529 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 23:14:23.755603 (dockerd)[2378]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 23:14:24.243455 dockerd[2378]: time="2025-07-15T23:14:24.243396450Z" level=info msg="Starting up"
Jul 15 23:14:24.245405 dockerd[2378]: time="2025-07-15T23:14:24.245258178Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 23:14:24.290425 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2845535581-merged.mount: Deactivated successfully.
Jul 15 23:14:24.376869 dockerd[2378]: time="2025-07-15T23:14:24.376658162Z" level=info msg="Loading containers: start."
Jul 15 23:14:24.388309 kernel: Initializing XFRM netlink socket
Jul 15 23:14:24.653687 systemd-networkd[1577]: docker0: Link UP
Jul 15 23:14:24.668312 dockerd[2378]: time="2025-07-15T23:14:24.668140586Z" level=info msg="Loading containers: done."
Jul 15 23:14:24.688214 dockerd[2378]: time="2025-07-15T23:14:24.688156730Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 23:14:24.688396 dockerd[2378]: time="2025-07-15T23:14:24.688259330Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 15 23:14:24.688426 dockerd[2378]: time="2025-07-15T23:14:24.688412418Z" level=info msg="Initializing buildkit"
Jul 15 23:14:24.747924 dockerd[2378]: time="2025-07-15T23:14:24.747837610Z" level=info msg="Completed buildkit initialization"
Jul 15 23:14:24.753026 dockerd[2378]: time="2025-07-15T23:14:24.752972370Z" level=info msg="Daemon has completed initialization"
Jul 15 23:14:24.753769 dockerd[2378]: time="2025-07-15T23:14:24.753311194Z" level=info msg="API listen on /run/docker.sock"
Jul 15 23:14:24.753475 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 23:14:25.288347 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck86366039-merged.mount: Deactivated successfully.
Jul 15 23:14:25.367492 containerd[1886]: time="2025-07-15T23:14:25.367405058Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\""
Jul 15 23:14:26.220580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743568215.mount: Deactivated successfully.
Jul 15 23:14:27.390329 containerd[1886]: time="2025-07-15T23:14:27.389874546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:27.392244 containerd[1886]: time="2025-07-15T23:14:27.392064314Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352094"
Jul 15 23:14:27.395422 containerd[1886]: time="2025-07-15T23:14:27.395395946Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:27.399255 containerd[1886]: time="2025-07-15T23:14:27.399194114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:27.399892 containerd[1886]: time="2025-07-15T23:14:27.399673954Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 2.032229224s"
Jul 15 23:14:27.399892 containerd[1886]: time="2025-07-15T23:14:27.399710306Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\""
Jul 15 23:14:27.401015 containerd[1886]: time="2025-07-15T23:14:27.400970538Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\""
Jul 15 23:14:28.982317 containerd[1886]: time="2025-07-15T23:14:28.981745778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:28.990329 containerd[1886]: time="2025-07-15T23:14:28.990289186Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537846"
Jul 15 23:14:28.995242 containerd[1886]: time="2025-07-15T23:14:28.995215162Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:29.003163 containerd[1886]: time="2025-07-15T23:14:29.003078970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:29.004097 containerd[1886]: time="2025-07-15T23:14:29.003664386Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 1.602669064s"
Jul 15 23:14:29.004097 containerd[1886]: time="2025-07-15T23:14:29.003698498Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\""
Jul 15 23:14:29.004217 containerd[1886]: time="2025-07-15T23:14:29.004197586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Jul 15 23:14:29.975597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 15 23:14:29.976933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:14:30.086518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:14:30.092838 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:14:30.206602 kubelet[2641]: E0715 23:14:30.206547 2641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:14:30.208939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:14:30.209057 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:14:30.211357 systemd[1]: kubelet.service: Consumed 110ms CPU time, 105.3M memory peak.
Jul 15 23:14:31.067343 containerd[1886]: time="2025-07-15T23:14:31.067271604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:31.070980 containerd[1886]: time="2025-07-15T23:14:31.070941502Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293524"
Jul 15 23:14:31.078818 containerd[1886]: time="2025-07-15T23:14:31.078789847Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:31.085460 containerd[1886]: time="2025-07-15T23:14:31.085397127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:31.085822 containerd[1886]: time="2025-07-15T23:14:31.085694527Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 2.081471949s"
Jul 15 23:14:31.085822 containerd[1886]: time="2025-07-15T23:14:31.085726744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\""
Jul 15 23:14:31.086424 containerd[1886]: time="2025-07-15T23:14:31.086400858Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Jul 15 23:14:32.176526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336728595.mount: Deactivated successfully.
Jul 15 23:14:32.467888 containerd[1886]: time="2025-07-15T23:14:32.467754832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:32.476073 containerd[1886]: time="2025-07-15T23:14:32.476026956Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199472"
Jul 15 23:14:32.479067 containerd[1886]: time="2025-07-15T23:14:32.479033061Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:32.483504 containerd[1886]: time="2025-07-15T23:14:32.483444962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:32.484022 containerd[1886]: time="2025-07-15T23:14:32.483657680Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 1.397232253s"
Jul 15 23:14:32.484022 containerd[1886]: time="2025-07-15T23:14:32.483685449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\""
Jul 15 23:14:32.484159 containerd[1886]: time="2025-07-15T23:14:32.484134597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 15 23:14:33.124981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3308049006.mount: Deactivated successfully.
Jul 15 23:14:34.734311 containerd[1886]: time="2025-07-15T23:14:34.733691522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:34.735718 containerd[1886]: time="2025-07-15T23:14:34.735692647Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jul 15 23:14:34.739597 containerd[1886]: time="2025-07-15T23:14:34.739573855Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:34.743106 containerd[1886]: time="2025-07-15T23:14:34.743079996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:34.743763 containerd[1886]: time="2025-07-15T23:14:34.743655532Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.259493695s"
Jul 15 23:14:34.743763 containerd[1886]: time="2025-07-15T23:14:34.743683957Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 15 23:14:34.744403 containerd[1886]: time="2025-07-15T23:14:34.744345646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:14:35.336865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266806049.mount: Deactivated successfully.
Jul 15 23:14:35.366606 containerd[1886]: time="2025-07-15T23:14:35.366547634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:14:35.369604 containerd[1886]: time="2025-07-15T23:14:35.369561122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 15 23:14:35.375211 containerd[1886]: time="2025-07-15T23:14:35.375154759Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:14:35.380263 containerd[1886]: time="2025-07-15T23:14:35.380202302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:14:35.380648 containerd[1886]: time="2025-07-15T23:14:35.380527711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 636.156224ms"
Jul 15 23:14:35.380648 containerd[1886]: time="2025-07-15T23:14:35.380555743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 23:14:35.380986 containerd[1886]: time="2025-07-15T23:14:35.380959578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 15 23:14:36.042419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109832565.mount: Deactivated successfully.
Jul 15 23:14:38.446255 containerd[1886]: time="2025-07-15T23:14:38.445539491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:38.450024 containerd[1886]: time="2025-07-15T23:14:38.449994417Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599"
Jul 15 23:14:38.455448 containerd[1886]: time="2025-07-15T23:14:38.455418617Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:38.460014 containerd[1886]: time="2025-07-15T23:14:38.459976465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:38.460655 containerd[1886]: time="2025-07-15T23:14:38.460626570Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.079639471s"
Jul 15 23:14:38.460760 containerd[1886]: time="2025-07-15T23:14:38.460743582Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 15 23:14:40.227636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 15 23:14:40.231474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:14:40.334407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:14:40.347606 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:14:40.441923 kubelet[2798]: E0715 23:14:40.441848 2798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:14:40.444534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:14:40.444836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:14:40.447351 systemd[1]: kubelet.service: Consumed 107ms CPU time, 104.6M memory peak.
Jul 15 23:14:41.843249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:14:41.843457 systemd[1]: kubelet.service: Consumed 107ms CPU time, 104.6M memory peak.
Jul 15 23:14:41.848350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:14:41.866930 systemd[1]: Reload requested from client PID 2813 ('systemctl') (unit session-9.scope)...
Jul 15 23:14:41.866946 systemd[1]: Reloading...
Jul 15 23:14:41.970441 zram_generator::config[2871]: No configuration found.
Jul 15 23:14:42.032929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:14:42.118181 systemd[1]: Reloading finished in 250 ms.
Jul 15 23:14:42.170999 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 23:14:42.171063 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 23:14:42.171410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:14:42.171456 systemd[1]: kubelet.service: Consumed 78ms CPU time, 95M memory peak.
Jul 15 23:14:42.173017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:14:42.430542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:14:42.433409 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:14:42.458419 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:14:42.458419 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:14:42.458419 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:14:42.458815 kubelet[2926]: I0715 23:14:42.458454 2926 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:14:42.877836 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 15 23:14:43.324693 update_engine[1868]: I20250715 23:14:43.324222 1868 update_attempter.cc:509] Updating boot flags...
Jul 15 23:14:44.102064 kubelet[2926]: I0715 23:14:44.101624 2926 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 15 23:14:44.102422 kubelet[2926]: I0715 23:14:44.102408 2926 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:14:44.102723 kubelet[2926]: I0715 23:14:44.102705 2926 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 15 23:14:44.115633 kubelet[2926]: I0715 23:14:44.115581 2926 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:14:44.117113 kubelet[2926]: E0715 23:14:44.117090 2926 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 15 23:14:44.127846 kubelet[2926]: I0715 23:14:44.127807 2926 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:14:44.133520 kubelet[2926]: I0715 23:14:44.133493 2926 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:14:44.135304 kubelet[2926]: I0715 23:14:44.134457 2926 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:14:44.135304 kubelet[2926]: I0715 23:14:44.134501 2926 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-7d7ad51cdd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:14:44.135304 kubelet[2926]: I0715 23:14:44.134633 2926 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:14:44.135304 kubelet[2926]: I0715 23:14:44.134640 2926 container_manager_linux.go:303] "Creating device plugin manager"
Jul 15 23:14:44.135629 kubelet[2926]: I0715 23:14:44.135605 2926 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:14:44.138979 kubelet[2926]: I0715 23:14:44.138935 2926 kubelet.go:480] "Attempting to sync node with API server"
Jul 15 23:14:44.138979 kubelet[2926]: I0715 23:14:44.138971 2926 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:14:44.139078 kubelet[2926]: I0715 23:14:44.139000 2926 kubelet.go:386] "Adding apiserver pod source"
Jul 15 23:14:44.139961 kubelet[2926]: I0715 23:14:44.139931 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:14:44.143020 kubelet[2926]: E0715 23:14:44.142968 2926 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-7d7ad51cdd&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 15 23:14:44.145259 kubelet[2926]: E0715 23:14:44.144791 2926 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 15 23:14:44.145820 kubelet[2926]: I0715 23:14:44.145784 2926 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:14:44.146251 kubelet[2926]: I0715 23:14:44.146220 2926 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 23:14:44.146855 kubelet[2926]: W0715 23:14:44.146288 2926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:14:44.154765 kubelet[2926]: I0715 23:14:44.154738 2926 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 23:14:44.154860 kubelet[2926]: I0715 23:14:44.154776 2926 server.go:1289] "Started kubelet"
Jul 15 23:14:44.158131 kubelet[2926]: I0715 23:14:44.157407 2926 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:14:44.161605 kubelet[2926]: I0715 23:14:44.161521 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:14:44.162463 kubelet[2926]: I0715 23:14:44.161893 2926 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:14:44.163406 kubelet[2926]: E0715 23:14:44.161995 2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-n-7d7ad51cdd.18528fc5abd54b51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-7d7ad51cdd,UID:ci-4372.0.1-n-7d7ad51cdd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-7d7ad51cdd,},FirstTimestamp:2025-07-15 23:14:44.154755921 +0000 UTC m=+1.718312214,LastTimestamp:2025-07-15 23:14:44.154755921 +0000 UTC m=+1.718312214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-7d7ad51cdd,}"
Jul 15 23:14:44.164511 kubelet[2926]: I0715 23:14:44.164492 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:14:44.166789 kubelet[2926]: I0715 23:14:44.166708 2926 server.go:317] "Adding debug handlers to kubelet server"
Jul 15 23:14:44.169760 kubelet[2926]: I0715 23:14:44.169727 2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:14:44.174403 kubelet[2926]: E0715 23:14:44.174349 2926 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:14:44.175024 kubelet[2926]: E0715 23:14:44.174500 2926 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found"
Jul 15 23:14:44.175024 kubelet[2926]: I0715 23:14:44.174592 2926 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 23:14:44.175024 kubelet[2926]: I0715 23:14:44.174802 2926 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 23:14:44.175024 kubelet[2926]: I0715 23:14:44.174855 2926 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:14:44.175299 kubelet[2926]: E0715 23:14:44.175263 2926 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 15 23:14:44.175702 kubelet[2926]: E0715 23:14:44.175632 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-7d7ad51cdd?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms"
Jul 15 23:14:44.178106 kubelet[2926]: I0715 23:14:44.178067 2926 factory.go:223] Registration of the containerd container factory successfully
Jul 15 23:14:44.178106 kubelet[2926]: I0715 23:14:44.178089 2926 factory.go:223] Registration of the systemd container factory successfully
Jul 15 23:14:44.178188 kubelet[2926]: I0715 23:14:44.178162 2926 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:14:44.218821 kubelet[2926]: I0715 23:14:44.218591 2926 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 23:14:44.218821 kubelet[2926]: I0715 23:14:44.218622 2926 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 23:14:44.218821 kubelet[2926]: I0715 23:14:44.218639 2926 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:14:44.228250 kubelet[2926]: I0715 23:14:44.228228 2926 policy_none.go:49] "None policy: Start"
Jul 15 23:14:44.228459 kubelet[2926]: I0715 23:14:44.228448 2926 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 23:14:44.228612 kubelet[2926]: I0715 23:14:44.228601 2926 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:14:44.229839 kubelet[2926]: I0715 23:14:44.229814 2926 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:14:44.234270 kubelet[2926]: I0715 23:14:44.233993 2926 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:14:44.234270 kubelet[2926]: I0715 23:14:44.234015 2926 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 15 23:14:44.234270 kubelet[2926]: I0715 23:14:44.234035 2926 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 23:14:44.234270 kubelet[2926]: I0715 23:14:44.234041 2926 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 15 23:14:44.234270 kubelet[2926]: E0715 23:14:44.234077 2926 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:14:44.236314 kubelet[2926]: E0715 23:14:44.236290 2926 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 15 23:14:44.243107 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 23:14:44.262513 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 23:14:44.265469 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 23:14:44.275124 kubelet[2926]: E0715 23:14:44.275091 2926 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 15 23:14:44.277348 kubelet[2926]: E0715 23:14:44.275384 2926 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found"
Jul 15 23:14:44.277348 kubelet[2926]: I0715 23:14:44.276348 2926 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:14:44.277552 kubelet[2926]: I0715 23:14:44.276366 2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:14:44.279483 kubelet[2926]: I0715 23:14:44.279076 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:14:44.282652 kubelet[2926]: E0715 23:14:44.282463 2926 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 23:14:44.282652 kubelet[2926]: E0715 23:14:44.282501 2926 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-n-7d7ad51cdd\" not found"
Jul 15 23:14:44.375841 kubelet[2926]: I0715 23:14:44.375658 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.375841 kubelet[2926]: I0715 23:14:44.375749 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b88a4eea81748905cff15c959ce82cf-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"6b88a4eea81748905cff15c959ce82cf\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.375841 kubelet[2926]: I0715 23:14:44.375765 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.375841 kubelet[2926]: I0715 23:14:44.375777 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.375841 kubelet[2926]: I0715 23:14:44.375787 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/514057d7ef70280bd7e47d3fb5ef64a4-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"514057d7ef70280bd7e47d3fb5ef64a4\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.376006 kubelet[2926]: I0715 23:14:44.375796 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b88a4eea81748905cff15c959ce82cf-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"6b88a4eea81748905cff15c959ce82cf\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.377572 kubelet[2926]: E0715 23:14:44.376096 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-7d7ad51cdd?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms"
Jul 15 23:14:44.377572 kubelet[2926]: I0715 23:14:44.376343 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b88a4eea81748905cff15c959ce82cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"6b88a4eea81748905cff15c959ce82cf\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.377572 kubelet[2926]: I0715 23:14:44.376456 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.377572 kubelet[2926]: I0715 23:14:44.376472 2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.380855 systemd[1]: Created slice kubepods-burstable-pod6b88a4eea81748905cff15c959ce82cf.slice - libcontainer container kubepods-burstable-pod6b88a4eea81748905cff15c959ce82cf.slice.
Jul 15 23:14:44.383065 kubelet[2926]: I0715 23:14:44.383004 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.383509 kubelet[2926]: E0715 23:14:44.383445 2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.391708 kubelet[2926]: E0715 23:14:44.391222 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.398437 systemd[1]: Created slice kubepods-burstable-pod40a4bf750e575ac9659ad35c61d9b505.slice - libcontainer container kubepods-burstable-pod40a4bf750e575ac9659ad35c61d9b505.slice.
Jul 15 23:14:44.402131 kubelet[2926]: E0715 23:14:44.402079 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.405058 systemd[1]: Created slice kubepods-burstable-pod514057d7ef70280bd7e47d3fb5ef64a4.slice - libcontainer container kubepods-burstable-pod514057d7ef70280bd7e47d3fb5ef64a4.slice.
Jul 15 23:14:44.408643 kubelet[2926]: E0715 23:14:44.408604 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.585816 kubelet[2926]: I0715 23:14:44.585772 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.586363 kubelet[2926]: E0715 23:14:44.586331 2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.693998 containerd[1886]: time="2025-07-15T23:14:44.693889631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-7d7ad51cdd,Uid:6b88a4eea81748905cff15c959ce82cf,Namespace:kube-system,Attempt:0,}"
Jul 15 23:14:44.703622 containerd[1886]: time="2025-07-15T23:14:44.703575911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd,Uid:40a4bf750e575ac9659ad35c61d9b505,Namespace:kube-system,Attempt:0,}"
Jul 15 23:14:44.709363 containerd[1886]: time="2025-07-15T23:14:44.709325863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-7d7ad51cdd,Uid:514057d7ef70280bd7e47d3fb5ef64a4,Namespace:kube-system,Attempt:0,}"
Jul 15 23:14:44.774591 containerd[1886]: time="2025-07-15T23:14:44.774515756Z" level=info msg="connecting to shim 8a1ec32ee254c3a3075ae294035bb6f9f402c108c089f5076d8087bd0bb44ffc" address="unix:///run/containerd/s/a81ebd6957f906b629ff93d6d9fac902771f9feaa7ad70c0b1a893d939d53607" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:14:44.777448 kubelet[2926]: E0715 23:14:44.777399 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-7d7ad51cdd?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms"
Jul 15 23:14:44.790604 containerd[1886]: time="2025-07-15T23:14:44.790487891Z" level=info msg="connecting to shim d4d9d33e41e6f6eb89dbdb45de6022ac942fdd4cecbc83952ea7566510ceb29f" address="unix:///run/containerd/s/b8cd5ccc8519c55e314c51a6db71f9ab7eeaee00e9eeda58f9391ce1fb70b32a" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:14:44.801526 systemd[1]: Started cri-containerd-8a1ec32ee254c3a3075ae294035bb6f9f402c108c089f5076d8087bd0bb44ffc.scope - libcontainer container 8a1ec32ee254c3a3075ae294035bb6f9f402c108c089f5076d8087bd0bb44ffc.
Jul 15 23:14:44.813031 containerd[1886]: time="2025-07-15T23:14:44.812933997Z" level=info msg="connecting to shim 1bbf7d3ab1e1ab316ac667d1e371df06eff58ea731af2d860fcb36837409c52a" address="unix:///run/containerd/s/990a23cb2a729266bf5bb46191af27103063324f96ceb25e964117b73664f549" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:14:44.825610 systemd[1]: Started cri-containerd-d4d9d33e41e6f6eb89dbdb45de6022ac942fdd4cecbc83952ea7566510ceb29f.scope - libcontainer container d4d9d33e41e6f6eb89dbdb45de6022ac942fdd4cecbc83952ea7566510ceb29f.
Jul 15 23:14:44.837749 systemd[1]: Started cri-containerd-1bbf7d3ab1e1ab316ac667d1e371df06eff58ea731af2d860fcb36837409c52a.scope - libcontainer container 1bbf7d3ab1e1ab316ac667d1e371df06eff58ea731af2d860fcb36837409c52a.
Jul 15 23:14:44.880865 containerd[1886]: time="2025-07-15T23:14:44.880804177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-7d7ad51cdd,Uid:6b88a4eea81748905cff15c959ce82cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1ec32ee254c3a3075ae294035bb6f9f402c108c089f5076d8087bd0bb44ffc\""
Jul 15 23:14:44.888994 containerd[1886]: time="2025-07-15T23:14:44.888897007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd,Uid:40a4bf750e575ac9659ad35c61d9b505,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4d9d33e41e6f6eb89dbdb45de6022ac942fdd4cecbc83952ea7566510ceb29f\""
Jul 15 23:14:44.892317 containerd[1886]: time="2025-07-15T23:14:44.891758459Z" level=info msg="CreateContainer within sandbox \"8a1ec32ee254c3a3075ae294035bb6f9f402c108c089f5076d8087bd0bb44ffc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 15 23:14:44.893392 containerd[1886]: time="2025-07-15T23:14:44.893358981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-7d7ad51cdd,Uid:514057d7ef70280bd7e47d3fb5ef64a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bbf7d3ab1e1ab316ac667d1e371df06eff58ea731af2d860fcb36837409c52a\""
Jul 15 23:14:44.895535 containerd[1886]: time="2025-07-15T23:14:44.895414764Z" level=info msg="CreateContainer within sandbox \"d4d9d33e41e6f6eb89dbdb45de6022ac942fdd4cecbc83952ea7566510ceb29f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 15 23:14:44.901542 containerd[1886]: time="2025-07-15T23:14:44.901513621Z" level=info msg="CreateContainer within sandbox \"1bbf7d3ab1e1ab316ac667d1e371df06eff58ea731af2d860fcb36837409c52a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 15 23:14:44.937702 containerd[1886]: time="2025-07-15T23:14:44.937656826Z" level=info msg="Container 9cc090a4b6d96f2c5023462bf2d4a085d62caef85ff538ea84b681507ecc10ef: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:44.947453 containerd[1886]: time="2025-07-15T23:14:44.947086131Z" level=info msg="Container 9fa43004fbe5707ed6410f368de910cced0498d39fbd9c8318d3566bf5b07abe: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:44.957031 containerd[1886]: time="2025-07-15T23:14:44.956942136Z" level=info msg="Container 0cc89fe10464524923c1c71738947f2ff487c9c6272fd5eda14f6b343e41a6be: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:44.984447 containerd[1886]: time="2025-07-15T23:14:44.984359430Z" level=info msg="CreateContainer within sandbox \"d4d9d33e41e6f6eb89dbdb45de6022ac942fdd4cecbc83952ea7566510ceb29f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9cc090a4b6d96f2c5023462bf2d4a085d62caef85ff538ea84b681507ecc10ef\""
Jul 15 23:14:44.985182 containerd[1886]: time="2025-07-15T23:14:44.985154083Z" level=info msg="StartContainer for \"9cc090a4b6d96f2c5023462bf2d4a085d62caef85ff538ea84b681507ecc10ef\""
Jul 15 23:14:44.986209 containerd[1886]: time="2025-07-15T23:14:44.986179878Z" level=info msg="connecting to shim 9cc090a4b6d96f2c5023462bf2d4a085d62caef85ff538ea84b681507ecc10ef" address="unix:///run/containerd/s/b8cd5ccc8519c55e314c51a6db71f9ab7eeaee00e9eeda58f9391ce1fb70b32a" protocol=ttrpc version=3
Jul 15 23:14:44.987833 kubelet[2926]: I0715 23:14:44.987808 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.988383 kubelet[2926]: E0715 23:14:44.988355 2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:44.991525 containerd[1886]: time="2025-07-15T23:14:44.991476906Z" level=info msg="CreateContainer within sandbox \"1bbf7d3ab1e1ab316ac667d1e371df06eff58ea731af2d860fcb36837409c52a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cc89fe10464524923c1c71738947f2ff487c9c6272fd5eda14f6b343e41a6be\""
Jul 15 23:14:44.992306 containerd[1886]: time="2025-07-15T23:14:44.992098530Z" level=info msg="StartContainer for \"0cc89fe10464524923c1c71738947f2ff487c9c6272fd5eda14f6b343e41a6be\""
Jul 15 23:14:44.993305 containerd[1886]: time="2025-07-15T23:14:44.993211104Z" level=info msg="connecting to shim 0cc89fe10464524923c1c71738947f2ff487c9c6272fd5eda14f6b343e41a6be" address="unix:///run/containerd/s/990a23cb2a729266bf5bb46191af27103063324f96ceb25e964117b73664f549" protocol=ttrpc version=3
Jul 15 23:14:45.004407 containerd[1886]: time="2025-07-15T23:14:45.004267684Z" level=info msg="CreateContainer within sandbox \"8a1ec32ee254c3a3075ae294035bb6f9f402c108c089f5076d8087bd0bb44ffc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9fa43004fbe5707ed6410f368de910cced0498d39fbd9c8318d3566bf5b07abe\""
Jul 15 23:14:45.004657 systemd[1]: Started cri-containerd-9cc090a4b6d96f2c5023462bf2d4a085d62caef85ff538ea84b681507ecc10ef.scope - libcontainer container 9cc090a4b6d96f2c5023462bf2d4a085d62caef85ff538ea84b681507ecc10ef.
Jul 15 23:14:45.005599 containerd[1886]: time="2025-07-15T23:14:45.005573319Z" level=info msg="StartContainer for \"9fa43004fbe5707ed6410f368de910cced0498d39fbd9c8318d3566bf5b07abe\""
Jul 15 23:14:45.007977 containerd[1886]: time="2025-07-15T23:14:45.007914005Z" level=info msg="connecting to shim 9fa43004fbe5707ed6410f368de910cced0498d39fbd9c8318d3566bf5b07abe" address="unix:///run/containerd/s/a81ebd6957f906b629ff93d6d9fac902771f9feaa7ad70c0b1a893d939d53607" protocol=ttrpc version=3
Jul 15 23:14:45.017431 systemd[1]: Started cri-containerd-0cc89fe10464524923c1c71738947f2ff487c9c6272fd5eda14f6b343e41a6be.scope - libcontainer container 0cc89fe10464524923c1c71738947f2ff487c9c6272fd5eda14f6b343e41a6be.
Jul 15 23:14:45.030420 systemd[1]: Started cri-containerd-9fa43004fbe5707ed6410f368de910cced0498d39fbd9c8318d3566bf5b07abe.scope - libcontainer container 9fa43004fbe5707ed6410f368de910cced0498d39fbd9c8318d3566bf5b07abe.
Jul 15 23:14:45.064997 containerd[1886]: time="2025-07-15T23:14:45.064949106Z" level=info msg="StartContainer for \"9cc090a4b6d96f2c5023462bf2d4a085d62caef85ff538ea84b681507ecc10ef\" returns successfully"
Jul 15 23:14:45.080387 containerd[1886]: time="2025-07-15T23:14:45.079783859Z" level=info msg="StartContainer for \"0cc89fe10464524923c1c71738947f2ff487c9c6272fd5eda14f6b343e41a6be\" returns successfully"
Jul 15 23:14:45.098096 containerd[1886]: time="2025-07-15T23:14:45.098051758Z" level=info msg="StartContainer for \"9fa43004fbe5707ed6410f368de910cced0498d39fbd9c8318d3566bf5b07abe\" returns successfully"
Jul 15 23:14:45.242887 kubelet[2926]: E0715 23:14:45.242734 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:45.245760 kubelet[2926]: E0715 23:14:45.245727 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:45.250622 kubelet[2926]: E0715 23:14:45.249168 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:45.790863 kubelet[2926]: I0715 23:14:45.790578 2926 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.253748 kubelet[2926]: E0715 23:14:46.253669 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.254433 kubelet[2926]: E0715 23:14:46.254322 2926 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.414638 kubelet[2926]: E0715 23:14:46.414580 2926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-n-7d7ad51cdd\" not found" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.511889 kubelet[2926]: I0715 23:14:46.511774 2926 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.512193 kubelet[2926]: E0715 23:14:46.512023 2926 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-n-7d7ad51cdd\": node \"ci-4372.0.1-n-7d7ad51cdd\" not found"
Jul 15 23:14:46.577172 kubelet[2926]: I0715 23:14:46.576118 2926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.593280 kubelet[2926]: E0715 23:14:46.593230 2926 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.593280 kubelet[2926]: I0715 23:14:46.593265 2926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.596727 kubelet[2926]: E0715 23:14:46.596610 2926 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.596727 kubelet[2926]: I0715 23:14:46.596638 2926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.599535 kubelet[2926]: E0715 23:14:46.599509 2926 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-7d7ad51cdd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.733692 kubelet[2926]: I0715 23:14:46.733128 2926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:46.740365 kubelet[2926]: E0715 23:14:46.740272 2926 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:47.146199 kubelet[2926]: I0715 23:14:47.145942 2926 apiserver.go:52] "Watching apiserver"
Jul 15 23:14:47.175177 kubelet[2926]: I0715 23:14:47.175146 2926 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 23:14:47.252701 kubelet[2926]: I0715 23:14:47.252635 2926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:47.253191 kubelet[2926]: I0715 23:14:47.253070 2926 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:47.260826 kubelet[2926]: I0715 23:14:47.260482 2926 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 15 23:14:47.264537 kubelet[2926]: I0715 23:14:47.264507 2926 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 15 23:14:48.476244 systemd[1]: Reload requested from client PID 3373 ('systemctl') (unit session-9.scope)...
Jul 15 23:14:48.476263 systemd[1]: Reloading...
Jul 15 23:14:48.555432 zram_generator::config[3419]: No configuration found.
Jul 15 23:14:48.636611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:14:48.731515 systemd[1]: Reloading finished in 254 ms.
Jul 15 23:14:48.752942 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:14:48.765678 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 23:14:48.766054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:14:48.766184 systemd[1]: kubelet.service: Consumed 1.304s CPU time, 124.9M memory peak.
Jul 15 23:14:48.768530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:14:48.864102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:14:48.869639 (kubelet)[3483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:14:48.898739 kubelet[3483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:14:48.898739 kubelet[3483]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:14:48.898739 kubelet[3483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:14:48.899253 kubelet[3483]: I0715 23:14:48.898774 3483 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:14:48.903371 kubelet[3483]: I0715 23:14:48.903337 3483 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 15 23:14:48.903371 kubelet[3483]: I0715 23:14:48.903365 3483 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:14:48.903565 kubelet[3483]: I0715 23:14:48.903548 3483 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 15 23:14:48.904519 kubelet[3483]: I0715 23:14:48.904497 3483 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 15 23:14:48.907580 kubelet[3483]: I0715 23:14:48.907412 3483 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:14:48.910566 kubelet[3483]: I0715 23:14:48.910550 3483 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:14:48.913240 kubelet[3483]: I0715 23:14:48.913209 3483 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:14:48.913437 kubelet[3483]: I0715 23:14:48.913412 3483 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:14:48.913556 kubelet[3483]: I0715 23:14:48.913436 3483 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-7d7ad51cdd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:14:48.913638 kubelet[3483]: I0715 23:14:48.913559 3483 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:14:48.913638 kubelet[3483]: I0715 23:14:48.913566 3483 container_manager_linux.go:303] "Creating device plugin manager"
Jul 15 23:14:48.913638 kubelet[3483]: I0715 23:14:48.913601 3483 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:14:48.913732 kubelet[3483]: I0715 23:14:48.913720 3483 kubelet.go:480] "Attempting to sync node with API server"
Jul 15 23:14:48.913756 kubelet[3483]: I0715 23:14:48.913734 3483 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:14:48.914097 kubelet[3483]: I0715 23:14:48.913763 3483 kubelet.go:386] "Adding apiserver pod source"
Jul 15 23:14:48.914128 kubelet[3483]: I0715 23:14:48.914114 3483 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:14:48.916532 kubelet[3483]: I0715 23:14:48.916499 3483 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:14:48.916871 kubelet[3483]: I0715 23:14:48.916855 3483 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 23:14:48.920713 kubelet[3483]: I0715 23:14:48.920692 3483 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 23:14:48.920782 kubelet[3483]: I0715 23:14:48.920730 3483 server.go:1289] "Started kubelet"
Jul 15 23:14:48.924395 kubelet[3483]: I0715 23:14:48.924354 3483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:14:48.928592 kubelet[3483]: I0715 23:14:48.928548 3483 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:14:48.930464 kubelet[3483]: I0715 23:14:48.929755 3483 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 23:14:48.930464 kubelet[3483]: E0715 23:14:48.929997 3483 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-7d7ad51cdd\" not found"
Jul 15 23:14:48.930464 kubelet[3483]: I0715 23:14:48.930267 3483 server.go:317] "Adding debug handlers to kubelet server"
Jul 15 23:14:48.931256 kubelet[3483]: I0715 23:14:48.931000 3483 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 23:14:48.931841 kubelet[3483]: I0715 23:14:48.931672 3483 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:14:48.935379 kubelet[3483]: I0715 23:14:48.934986 3483 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:14:48.935379 kubelet[3483]: I0715 23:14:48.935203 3483 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:14:48.935489 kubelet[3483]: I0715 23:14:48.935436 3483 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:14:48.936354 kubelet[3483]: I0715 23:14:48.936337 3483 factory.go:223] Registration of the systemd container factory successfully
Jul 15 23:14:48.937053 kubelet[3483]: I0715 23:14:48.937025 3483 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:14:48.941851 kubelet[3483]: E0715 23:14:48.941826 3483 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:14:48.945215 kubelet[3483]: I0715 23:14:48.945182 3483 factory.go:223] Registration of the containerd container factory successfully
Jul 15 23:14:48.948470 kubelet[3483]: I0715 23:14:48.948259 3483 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:14:48.951628 kubelet[3483]: I0715 23:14:48.951602 3483 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv6" Jul 15 23:14:48.951765 kubelet[3483]: I0715 23:14:48.951754 3483 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:14:48.951830 kubelet[3483]: I0715 23:14:48.951821 3483 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 23:14:48.951875 kubelet[3483]: I0715 23:14:48.951866 3483 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:14:48.951958 kubelet[3483]: E0715 23:14:48.951941 3483 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:14:49.016008 kubelet[3483]: I0715 23:14:49.015188 3483 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016303 3483 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016335 3483 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016467 3483 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016476 3483 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016491 3483 policy_none.go:49] "None policy: Start" Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016500 3483 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016509 3483 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:14:49.017131 kubelet[3483]: I0715 23:14:49.016569 3483 state_mem.go:75] "Updated machine memory state" Jul 15 23:14:49.020444 kubelet[3483]: E0715 23:14:49.020424 3483 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:14:49.020704 kubelet[3483]: I0715 
23:14:49.020689 3483 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:14:49.020805 kubelet[3483]: I0715 23:14:49.020775 3483 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:14:49.023504 kubelet[3483]: I0715 23:14:49.023486 3483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:14:49.026054 kubelet[3483]: E0715 23:14:49.026023 3483 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:14:49.053150 kubelet[3483]: I0715 23:14:49.053113 3483 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.053608 kubelet[3483]: I0715 23:14:49.053585 3483 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.054199 kubelet[3483]: I0715 23:14:49.053744 3483 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.066299 kubelet[3483]: I0715 23:14:49.066179 3483 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:14:49.066677 kubelet[3483]: I0715 23:14:49.066559 3483 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:14:49.066764 kubelet[3483]: E0715 23:14:49.066572 3483 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.067597 kubelet[3483]: I0715 23:14:49.067553 3483 warnings.go:110] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 23:14:49.067699 kubelet[3483]: E0715 23:14:49.067603 3483 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-7d7ad51cdd\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.124096 kubelet[3483]: I0715 23:14:49.123795 3483 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132706 kubelet[3483]: I0715 23:14:49.132656 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b88a4eea81748905cff15c959ce82cf-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"6b88a4eea81748905cff15c959ce82cf\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132706 kubelet[3483]: I0715 23:14:49.132700 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132706 kubelet[3483]: I0715 23:14:49.132715 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132878 kubelet[3483]: I0715 23:14:49.132726 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132878 kubelet[3483]: I0715 23:14:49.132738 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132878 kubelet[3483]: I0715 23:14:49.132756 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b88a4eea81748905cff15c959ce82cf-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"6b88a4eea81748905cff15c959ce82cf\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132878 kubelet[3483]: I0715 23:14:49.132766 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b88a4eea81748905cff15c959ce82cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"6b88a4eea81748905cff15c959ce82cf\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:14:49.132878 kubelet[3483]: I0715 23:14:49.132775 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40a4bf750e575ac9659ad35c61d9b505-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"40a4bf750e575ac9659ad35c61d9b505\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd" 
Jul 15 23:14:49.133103 kubelet[3483]: I0715 23:14:49.132784 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/514057d7ef70280bd7e47d3fb5ef64a4-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-7d7ad51cdd\" (UID: \"514057d7ef70280bd7e47d3fb5ef64a4\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:49.136109 kubelet[3483]: I0715 23:14:49.136068 3483 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:49.136194 kubelet[3483]: I0715 23:14:49.136173 3483 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:49.915334 kubelet[3483]: I0715 23:14:49.915291 3483 apiserver.go:52] "Watching apiserver"
Jul 15 23:14:49.932671 kubelet[3483]: I0715 23:14:49.932628 3483 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 23:14:49.991270 kubelet[3483]: I0715 23:14:49.991215 3483 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:49.991517 kubelet[3483]: I0715 23:14:49.991496 3483 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:50.015062 kubelet[3483]: I0715 23:14:50.015030 3483 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 15 23:14:50.015290 kubelet[3483]: E0715 23:14:50.015258 3483 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-7d7ad51cdd\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:50.025819 kubelet[3483]: I0715 23:14:50.024257 3483 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 15 23:14:50.025819 kubelet[3483]: E0715 23:14:50.024332 3483 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-7d7ad51cdd\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd"
Jul 15 23:14:50.026014 kubelet[3483]: I0715 23:14:50.025963 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-n-7d7ad51cdd" podStartSLOduration=3.025953329 podStartE2EDuration="3.025953329s" podCreationTimestamp="2025-07-15 23:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:50.024875449 +0000 UTC m=+1.152404509" watchObservedRunningTime="2025-07-15 23:14:50.025953329 +0000 UTC m=+1.153482389"
Jul 15 23:14:50.055697 kubelet[3483]: I0715 23:14:50.055637 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-n-7d7ad51cdd" podStartSLOduration=3.055618379 podStartE2EDuration="3.055618379s" podCreationTimestamp="2025-07-15 23:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:50.039555153 +0000 UTC m=+1.167084213" watchObservedRunningTime="2025-07-15 23:14:50.055618379 +0000 UTC m=+1.183147439"
Jul 15 23:14:50.080146 kubelet[3483]: I0715 23:14:50.079622 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-7d7ad51cdd" podStartSLOduration=1.07960537 podStartE2EDuration="1.07960537s" podCreationTimestamp="2025-07-15 23:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:50.056525158 +0000 UTC m=+1.184054242" watchObservedRunningTime="2025-07-15 23:14:50.07960537 +0000 UTC m=+1.207134430"
Jul 15 23:14:55.820363 kubelet[3483]: I0715 23:14:55.820322 3483 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 23:14:55.821088 containerd[1886]: time="2025-07-15T23:14:55.821001655Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 23:14:55.821359 kubelet[3483]: I0715 23:14:55.821207 3483 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 23:14:56.730910 systemd[1]: Created slice kubepods-besteffort-pod6ee2da97_e879_4c36_b5a6_de8be0e60952.slice - libcontainer container kubepods-besteffort-pod6ee2da97_e879_4c36_b5a6_de8be0e60952.slice.
Jul 15 23:14:56.777771 kubelet[3483]: I0715 23:14:56.777726 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee2da97-e879-4c36-b5a6-de8be0e60952-xtables-lock\") pod \"kube-proxy-fq8cw\" (UID: \"6ee2da97-e879-4c36-b5a6-de8be0e60952\") " pod="kube-system/kube-proxy-fq8cw"
Jul 15 23:14:56.777771 kubelet[3483]: I0715 23:14:56.777769 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ee2da97-e879-4c36-b5a6-de8be0e60952-kube-proxy\") pod \"kube-proxy-fq8cw\" (UID: \"6ee2da97-e879-4c36-b5a6-de8be0e60952\") " pod="kube-system/kube-proxy-fq8cw"
Jul 15 23:14:56.777771 kubelet[3483]: I0715 23:14:56.777782 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee2da97-e879-4c36-b5a6-de8be0e60952-lib-modules\") pod \"kube-proxy-fq8cw\" (UID: \"6ee2da97-e879-4c36-b5a6-de8be0e60952\") " pod="kube-system/kube-proxy-fq8cw"
Jul 15 23:14:56.777957 kubelet[3483]: I0715 23:14:56.777792 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw87w\" (UniqueName: \"kubernetes.io/projected/6ee2da97-e879-4c36-b5a6-de8be0e60952-kube-api-access-tw87w\") pod \"kube-proxy-fq8cw\" (UID: \"6ee2da97-e879-4c36-b5a6-de8be0e60952\") " pod="kube-system/kube-proxy-fq8cw"
Jul 15 23:14:57.039823 containerd[1886]: time="2025-07-15T23:14:57.039675341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fq8cw,Uid:6ee2da97-e879-4c36-b5a6-de8be0e60952,Namespace:kube-system,Attempt:0,}"
Jul 15 23:14:57.074648 systemd[1]: Created slice kubepods-besteffort-podf69681c5_1595_4e33_bed1_77da7db38f6d.slice - libcontainer container kubepods-besteffort-podf69681c5_1595_4e33_bed1_77da7db38f6d.slice.
Jul 15 23:14:57.080310 kubelet[3483]: I0715 23:14:57.079462 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f69681c5-1595-4e33-bed1-77da7db38f6d-var-lib-calico\") pod \"tigera-operator-747864d56d-96gdx\" (UID: \"f69681c5-1595-4e33-bed1-77da7db38f6d\") " pod="tigera-operator/tigera-operator-747864d56d-96gdx"
Jul 15 23:14:57.080703 kubelet[3483]: I0715 23:14:57.080641 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97pk\" (UniqueName: \"kubernetes.io/projected/f69681c5-1595-4e33-bed1-77da7db38f6d-kube-api-access-f97pk\") pod \"tigera-operator-747864d56d-96gdx\" (UID: \"f69681c5-1595-4e33-bed1-77da7db38f6d\") " pod="tigera-operator/tigera-operator-747864d56d-96gdx"
Jul 15 23:14:57.100898 containerd[1886]: time="2025-07-15T23:14:57.100483624Z" level=info msg="connecting to shim 2b26d6140177494bc4d33d908f7605c5fde8adc02ab4ad4f45ecca36ca372017" address="unix:///run/containerd/s/584a60d04b5b971aec006f4eeb159f75f4212b71112465cba6cce5b7bca6dd01" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:14:57.119452 systemd[1]: Started cri-containerd-2b26d6140177494bc4d33d908f7605c5fde8adc02ab4ad4f45ecca36ca372017.scope - libcontainer container 2b26d6140177494bc4d33d908f7605c5fde8adc02ab4ad4f45ecca36ca372017.
Jul 15 23:14:57.141838 containerd[1886]: time="2025-07-15T23:14:57.141794136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fq8cw,Uid:6ee2da97-e879-4c36-b5a6-de8be0e60952,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b26d6140177494bc4d33d908f7605c5fde8adc02ab4ad4f45ecca36ca372017\""
Jul 15 23:14:57.151825 containerd[1886]: time="2025-07-15T23:14:57.151783440Z" level=info msg="CreateContainer within sandbox \"2b26d6140177494bc4d33d908f7605c5fde8adc02ab4ad4f45ecca36ca372017\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 23:14:57.176317 containerd[1886]: time="2025-07-15T23:14:57.176063957Z" level=info msg="Container 1ee724b88474e6de93c4355e3835639f5e5c2972078d950f27583650f812d285: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:57.192929 containerd[1886]: time="2025-07-15T23:14:57.192881546Z" level=info msg="CreateContainer within sandbox \"2b26d6140177494bc4d33d908f7605c5fde8adc02ab4ad4f45ecca36ca372017\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ee724b88474e6de93c4355e3835639f5e5c2972078d950f27583650f812d285\""
Jul 15 23:14:57.194190 containerd[1886]: time="2025-07-15T23:14:57.194137719Z" level=info msg="StartContainer for \"1ee724b88474e6de93c4355e3835639f5e5c2972078d950f27583650f812d285\""
Jul 15 23:14:57.195911 containerd[1886]: time="2025-07-15T23:14:57.195734325Z" level=info msg="connecting to shim 1ee724b88474e6de93c4355e3835639f5e5c2972078d950f27583650f812d285" address="unix:///run/containerd/s/584a60d04b5b971aec006f4eeb159f75f4212b71112465cba6cce5b7bca6dd01" protocol=ttrpc version=3
Jul 15 23:14:57.213454 systemd[1]: Started cri-containerd-1ee724b88474e6de93c4355e3835639f5e5c2972078d950f27583650f812d285.scope - libcontainer container 1ee724b88474e6de93c4355e3835639f5e5c2972078d950f27583650f812d285.
Jul 15 23:14:57.244304 containerd[1886]: time="2025-07-15T23:14:57.244232308Z" level=info msg="StartContainer for \"1ee724b88474e6de93c4355e3835639f5e5c2972078d950f27583650f812d285\" returns successfully"
Jul 15 23:14:57.380211 containerd[1886]: time="2025-07-15T23:14:57.380109022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-96gdx,Uid:f69681c5-1595-4e33-bed1-77da7db38f6d,Namespace:tigera-operator,Attempt:0,}"
Jul 15 23:14:57.433891 containerd[1886]: time="2025-07-15T23:14:57.433838701Z" level=info msg="connecting to shim 21012e6ad67a0f968e2b12223cd400cb552d57b89f98e077f245c4d06e08085b" address="unix:///run/containerd/s/0c18741d9ddcd181a741d61723f135bcb5e3c6deb0ffd712b0312a2171a469fc" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:14:57.453419 systemd[1]: Started cri-containerd-21012e6ad67a0f968e2b12223cd400cb552d57b89f98e077f245c4d06e08085b.scope - libcontainer container 21012e6ad67a0f968e2b12223cd400cb552d57b89f98e077f245c4d06e08085b.
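The kubelet entries in this log all share the klog header layout `Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg`, where the leading letter is the severity (I/W/E/F). A small sketch of parsing that header (the regex is an assumption matching the lines shown here, not klog's own grammar):

```python
import re

# klog header: Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) '
    r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)'
)

SEVERITY = {"I": "INFO", "W": "WARNING", "E": "ERROR", "F": "FATAL"}

def parse_klog(line: str) -> dict:
    """Split one klog-formatted line into severity, timestamp, pid, source, message."""
    m = KLOG.match(line)
    if not m:
        raise ValueError(f"not a klog line: {line!r}")
    record = m.groupdict()
    record["sev"] = SEVERITY[record["sev"]]
    return record

rec = parse_klog('E0715 23:14:48.941826 3483 kubelet.go:1600] '
                 '"Image garbage collection failed once. Stats initialization may '
                 'not have completed yet" err="invalid capacity 0 on image filesystem"')
print(rec["sev"], rec["src"])  # → ERROR kubelet.go:1600
```

Note the klog timestamp has no year; the journald prefix (`Jul 15 23:14:48.941851 kubelet[3483]:`) in front of each line is what anchors these entries in wall-clock time.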
Jul 15 23:14:57.503611 containerd[1886]: time="2025-07-15T23:14:57.503560257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-96gdx,Uid:f69681c5-1595-4e33-bed1-77da7db38f6d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"21012e6ad67a0f968e2b12223cd400cb552d57b89f98e077f245c4d06e08085b\""
Jul 15 23:14:57.506070 containerd[1886]: time="2025-07-15T23:14:57.505992639Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 23:14:58.018880 kubelet[3483]: I0715 23:14:58.018813 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fq8cw" podStartSLOduration=2.018777511 podStartE2EDuration="2.018777511s" podCreationTimestamp="2025-07-15 23:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:58.018047858 +0000 UTC m=+9.145576918" watchObservedRunningTime="2025-07-15 23:14:58.018777511 +0000 UTC m=+9.146306571"
Jul 15 23:14:58.810639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992134156.mount: Deactivated successfully.
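The `pod_startup_latency_tracker` entry for kube-proxy-fq8cw reports `podStartE2EDuration="2.018777511s"`, which is simply `watchObservedRunningTime` minus `podCreationTimestamp`. A quick check of that arithmetic (truncating nanoseconds, since Python's `datetime` only carries microseconds):

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    """Parse 'YYYY-MM-DD hh:mm:ss[.nnnnnnnnn]' , trimming fractions to microseconds."""
    date, clock = s.split(" ")
    if "." in clock:
        whole, frac = clock.split(".")
        clock = f"{whole}.{frac[:6]}"  # datetime keeps at most 6 fractional digits
    return datetime.fromisoformat(f"{date} {clock}").replace(tzinfo=timezone.utc)

# Values copied from the kube-proxy-fq8cw latency-tracker entry above.
created  = ts("2025-07-15 23:14:56")
observed = ts("2025-07-15 23:14:58.018777511")

e2e = (observed - created).total_seconds()
print(f"{e2e:.6f}s")  # → 2.018777s, matching the reported duration (to microseconds)
```

`firstStartedPulling`/`lastFinishedPulling` are the Go zero time (`0001-01-01`) here because the kube-proxy image was already present on the node, so no pull contributed to the startup latency.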
Jul 15 23:14:59.160258 containerd[1886]: time="2025-07-15T23:14:59.160053123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:59.162268 containerd[1886]: time="2025-07-15T23:14:59.162233914Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 15 23:14:59.165050 containerd[1886]: time="2025-07-15T23:14:59.165003658Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:59.170257 containerd[1886]: time="2025-07-15T23:14:59.170209696Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:59.170699 containerd[1886]: time="2025-07-15T23:14:59.170676038Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.664608596s"
Jul 15 23:14:59.170699 containerd[1886]: time="2025-07-15T23:14:59.170700830Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 15 23:14:59.178880 containerd[1886]: time="2025-07-15T23:14:59.178763895Z" level=info msg="CreateContainer within sandbox \"21012e6ad67a0f968e2b12223cd400cb552d57b89f98e077f245c4d06e08085b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 23:14:59.202151 containerd[1886]: time="2025-07-15T23:14:59.201998646Z" level=info msg="Container 49b9602028208c017974c481b6914a68ae1a28d4b51a5052550f2cf2800e2510: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:59.217034 containerd[1886]: time="2025-07-15T23:14:59.216969462Z" level=info msg="CreateContainer within sandbox \"21012e6ad67a0f968e2b12223cd400cb552d57b89f98e077f245c4d06e08085b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49b9602028208c017974c481b6914a68ae1a28d4b51a5052550f2cf2800e2510\""
Jul 15 23:14:59.218518 containerd[1886]: time="2025-07-15T23:14:59.218460833Z" level=info msg="StartContainer for \"49b9602028208c017974c481b6914a68ae1a28d4b51a5052550f2cf2800e2510\""
Jul 15 23:14:59.219504 containerd[1886]: time="2025-07-15T23:14:59.219430533Z" level=info msg="connecting to shim 49b9602028208c017974c481b6914a68ae1a28d4b51a5052550f2cf2800e2510" address="unix:///run/containerd/s/0c18741d9ddcd181a741d61723f135bcb5e3c6deb0ffd712b0312a2171a469fc" protocol=ttrpc version=3
Jul 15 23:14:59.238504 systemd[1]: Started cri-containerd-49b9602028208c017974c481b6914a68ae1a28d4b51a5052550f2cf2800e2510.scope - libcontainer container 49b9602028208c017974c481b6914a68ae1a28d4b51a5052550f2cf2800e2510.
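The tigera-operator pull above reports the image size (`22146605` bytes by repo digest) and the wall time (`1.664608596s`), which gives a rough effective pull throughput. This is only a back-of-envelope figure: it ignores registry handshake and unpack overhead, and the two numbers come straight from the log lines above.

```python
size_bytes = 22_146_605     # repo-digest size from the "Pulled image" entry
pull_seconds = 1.664608596  # "... in 1.664608596s" from the same entry

throughput_mib_s = size_bytes / pull_seconds / (1024 * 1024)
print(f"{throughput_mib_s:.1f} MiB/s")  # → 12.7 MiB/s
```

The nearby `bytes read=22150610` from the "stop pulling" entry differs slightly from the digest size, since it counts what was actually transferred for this pull.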
Jul 15 23:14:59.265176 containerd[1886]: time="2025-07-15T23:14:59.265132052Z" level=info msg="StartContainer for \"49b9602028208c017974c481b6914a68ae1a28d4b51a5052550f2cf2800e2510\" returns successfully"
Jul 15 23:15:00.027993 kubelet[3483]: I0715 23:15:00.027827 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-96gdx" podStartSLOduration=2.361102279 podStartE2EDuration="4.0278144s" podCreationTimestamp="2025-07-15 23:14:56 +0000 UTC" firstStartedPulling="2025-07-15 23:14:57.504989074 +0000 UTC m=+8.632518134" lastFinishedPulling="2025-07-15 23:14:59.171701195 +0000 UTC m=+10.299230255" observedRunningTime="2025-07-15 23:15:00.027696597 +0000 UTC m=+11.155225657" watchObservedRunningTime="2025-07-15 23:15:00.0278144 +0000 UTC m=+11.155343468"
Jul 15 23:15:04.622662 sudo[2361]: pam_unix(sudo:session): session closed for user root
Jul 15 23:15:04.709149 sshd[2360]: Connection closed by 10.200.16.10 port 55164
Jul 15 23:15:04.711566 sshd-session[2358]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:04.716932 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:55164.service: Deactivated successfully.
Jul 15 23:15:04.717607 systemd-logind[1862]: Session 9 logged out. Waiting for processes to exit.
Jul 15 23:15:04.723946 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 23:15:04.726563 systemd[1]: session-9.scope: Consumed 4.389s CPU time, 229.6M memory peak.
Jul 15 23:15:04.731617 systemd-logind[1862]: Removed session 9.
Jul 15 23:15:08.946729 systemd[1]: Created slice kubepods-besteffort-pod976814ed_338c_4d15_980c_0cd195598a14.slice - libcontainer container kubepods-besteffort-pod976814ed_338c_4d15_980c_0cd195598a14.slice.
Jul 15 23:15:08.948932 kubelet[3483]: I0715 23:15:08.948893 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/976814ed-338c-4d15-980c-0cd195598a14-typha-certs\") pod \"calico-typha-85759bdcc7-mx6z7\" (UID: \"976814ed-338c-4d15-980c-0cd195598a14\") " pod="calico-system/calico-typha-85759bdcc7-mx6z7"
Jul 15 23:15:08.949169 kubelet[3483]: I0715 23:15:08.948928 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x6md\" (UniqueName: \"kubernetes.io/projected/976814ed-338c-4d15-980c-0cd195598a14-kube-api-access-8x6md\") pod \"calico-typha-85759bdcc7-mx6z7\" (UID: \"976814ed-338c-4d15-980c-0cd195598a14\") " pod="calico-system/calico-typha-85759bdcc7-mx6z7"
Jul 15 23:15:08.949169 kubelet[3483]: I0715 23:15:08.948954 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/976814ed-338c-4d15-980c-0cd195598a14-tigera-ca-bundle\") pod \"calico-typha-85759bdcc7-mx6z7\" (UID: \"976814ed-338c-4d15-980c-0cd195598a14\") " pod="calico-system/calico-typha-85759bdcc7-mx6z7"
Jul 15 23:15:09.051296 kubelet[3483]: I0715 23:15:09.050363 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-cni-net-dir\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.051296 kubelet[3483]: I0715 23:15:09.051210 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-var-lib-calico\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.051296 kubelet[3483]: I0715 23:15:09.051231 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx79b\" (UniqueName: \"kubernetes.io/projected/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-kube-api-access-tx79b\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.051296 kubelet[3483]: I0715 23:15:09.051255 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-lib-modules\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052392 kubelet[3483]: I0715 23:15:09.051265 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-cni-bin-dir\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052392 kubelet[3483]: I0715 23:15:09.051673 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-flexvol-driver-host\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052392 kubelet[3483]: I0715 23:15:09.051687 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-node-certs\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052392 kubelet[3483]: I0715 23:15:09.051708 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-policysync\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052392 kubelet[3483]: I0715 23:15:09.051737 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-var-run-calico\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052535 kubelet[3483]: I0715 23:15:09.051745 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-xtables-lock\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052535 kubelet[3483]: I0715 23:15:09.051756 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-cni-log-dir\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052535 kubelet[3483]: I0715 23:15:09.051764 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c33c4a1a-3601-46b7-8b2e-e189c38bbbec-tigera-ca-bundle\") pod \"calico-node-tjlt8\" (UID: \"c33c4a1a-3601-46b7-8b2e-e189c38bbbec\") " pod="calico-system/calico-node-tjlt8"
Jul 15 23:15:09.052662 systemd[1]: Created slice kubepods-besteffort-podc33c4a1a_3601_46b7_8b2e_e189c38bbbec.slice - libcontainer container kubepods-besteffort-podc33c4a1a_3601_46b7_8b2e_e189c38bbbec.slice.
Jul 15 23:15:09.162164 kubelet[3483]: E0715 23:15:09.162135 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.162164 kubelet[3483]: W0715 23:15:09.162156 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.162393 kubelet[3483]: E0715 23:15:09.162178 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.202288 kubelet[3483]: E0715 23:15:09.202156 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhhj5" podUID="ebad4995-7e5b-4942-987a-9ae9ac290621"
Jul 15 23:15:09.219931 kubelet[3483]: E0715 23:15:09.219670 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.219931 kubelet[3483]: W0715 23:15:09.219692 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.219931 kubelet[3483]: E0715 23:15:09.219709 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.245553 kubelet[3483]: E0715 23:15:09.245514 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.245553 kubelet[3483]: W0715 23:15:09.245542 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.245553 kubelet[3483]: E0715 23:15:09.245563 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.246162 kubelet[3483]: E0715 23:15:09.245931 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.246162 kubelet[3483]: W0715 23:15:09.245943 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.246162 kubelet[3483]: E0715 23:15:09.245983 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 15 23:15:09.246162 kubelet[3483]: E0715 23:15:09.246120 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.246162 kubelet[3483]: W0715 23:15:09.246126 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.246162 kubelet[3483]: E0715 23:15:09.246133 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.246861 kubelet[3483]: E0715 23:15:09.246242 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.246861 kubelet[3483]: W0715 23:15:09.246247 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.246861 kubelet[3483]: E0715 23:15:09.246252 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.247175 kubelet[3483]: E0715 23:15:09.247153 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.247175 kubelet[3483]: W0715 23:15:09.247171 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.247241 kubelet[3483]: E0715 23:15:09.247182 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.247561 kubelet[3483]: E0715 23:15:09.247541 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.247561 kubelet[3483]: W0715 23:15:09.247557 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.247561 kubelet[3483]: E0715 23:15:09.247567 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.247714 kubelet[3483]: E0715 23:15:09.247693 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.247714 kubelet[3483]: W0715 23:15:09.247703 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.247714 kubelet[3483]: E0715 23:15:09.247710 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.247885 kubelet[3483]: E0715 23:15:09.247809 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.247885 kubelet[3483]: W0715 23:15:09.247814 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.247885 kubelet[3483]: E0715 23:15:09.247820 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.248685 kubelet[3483]: E0715 23:15:09.247917 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.248685 kubelet[3483]: W0715 23:15:09.247924 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.248685 kubelet[3483]: E0715 23:15:09.247929 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.248685 kubelet[3483]: E0715 23:15:09.248006 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.248685 kubelet[3483]: W0715 23:15:09.248011 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.248685 kubelet[3483]: E0715 23:15:09.248016 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.248685 kubelet[3483]: E0715 23:15:09.248091 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.248685 kubelet[3483]: W0715 23:15:09.248095 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.248685 kubelet[3483]: E0715 23:15:09.248102 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.248685 kubelet[3483]: E0715 23:15:09.248177 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.248831 kubelet[3483]: W0715 23:15:09.248182 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.248831 kubelet[3483]: E0715 23:15:09.248187 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 15 23:15:09.248831 kubelet[3483]: E0715 23:15:09.248270 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.248831 kubelet[3483]: W0715 23:15:09.248291 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.248831 kubelet[3483]: E0715 23:15:09.248296 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.248831 kubelet[3483]: E0715 23:15:09.248384 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.248831 kubelet[3483]: W0715 23:15:09.248388 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.248831 kubelet[3483]: E0715 23:15:09.248392 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.248831 kubelet[3483]: E0715 23:15:09.248465 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.248831 kubelet[3483]: W0715 23:15:09.248470 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.248969 kubelet[3483]: E0715 23:15:09.248475 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.249172 kubelet[3483]: E0715 23:15:09.249156 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.249172 kubelet[3483]: W0715 23:15:09.249167 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.249172 kubelet[3483]: E0715 23:15:09.249175 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.249343 kubelet[3483]: E0715 23:15:09.249331 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.249343 kubelet[3483]: W0715 23:15:09.249340 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.249421 kubelet[3483]: E0715 23:15:09.249347 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.249531 kubelet[3483]: E0715 23:15:09.249518 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.249531 kubelet[3483]: W0715 23:15:09.249529 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.249585 kubelet[3483]: E0715 23:15:09.249537 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.250304 kubelet[3483]: E0715 23:15:09.250269 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.250304 kubelet[3483]: W0715 23:15:09.250299 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.250304 kubelet[3483]: E0715 23:15:09.250307 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.250501 kubelet[3483]: E0715 23:15:09.250405 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.250501 kubelet[3483]: W0715 23:15:09.250418 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.250501 kubelet[3483]: E0715 23:15:09.250424 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.251106 containerd[1886]: time="2025-07-15T23:15:09.251066511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85759bdcc7-mx6z7,Uid:976814ed-338c-4d15-980c-0cd195598a14,Namespace:calico-system,Attempt:0,}"
Jul 15 23:15:09.256428 kubelet[3483]: E0715 23:15:09.256216 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.256428 kubelet[3483]: W0715 23:15:09.256240 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.256428 kubelet[3483]: E0715 23:15:09.256255 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.256428 kubelet[3483]: I0715 23:15:09.256301 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebad4995-7e5b-4942-987a-9ae9ac290621-socket-dir\") pod \"csi-node-driver-dhhj5\" (UID: \"ebad4995-7e5b-4942-987a-9ae9ac290621\") " pod="calico-system/csi-node-driver-dhhj5"
Jul 15 23:15:09.256744 kubelet[3483]: E0715 23:15:09.256666 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.256744 kubelet[3483]: W0715 23:15:09.256682 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.256744 kubelet[3483]: E0715 23:15:09.256693 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 15 23:15:09.256744 kubelet[3483]: I0715 23:15:09.256713 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ebad4995-7e5b-4942-987a-9ae9ac290621-varrun\") pod \"csi-node-driver-dhhj5\" (UID: \"ebad4995-7e5b-4942-987a-9ae9ac290621\") " pod="calico-system/csi-node-driver-dhhj5"
Jul 15 23:15:09.256903 kubelet[3483]: E0715 23:15:09.256870 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.256903 kubelet[3483]: W0715 23:15:09.256887 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.256903 kubelet[3483]: E0715 23:15:09.256897 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.257041 kubelet[3483]: E0715 23:15:09.257003 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.257041 kubelet[3483]: W0715 23:15:09.257008 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.257041 kubelet[3483]: E0715 23:15:09.257014 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.257989 kubelet[3483]: E0715 23:15:09.257967 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.257989 kubelet[3483]: W0715 23:15:09.257986 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.258060 kubelet[3483]: E0715 23:15:09.257997 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.258060 kubelet[3483]: I0715 23:15:09.258016 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebad4995-7e5b-4942-987a-9ae9ac290621-kubelet-dir\") pod \"csi-node-driver-dhhj5\" (UID: \"ebad4995-7e5b-4942-987a-9ae9ac290621\") " pod="calico-system/csi-node-driver-dhhj5"
Jul 15 23:15:09.258208 kubelet[3483]: E0715 23:15:09.258190 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.258208 kubelet[3483]: W0715 23:15:09.258203 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.258255 kubelet[3483]: E0715 23:15:09.258210 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.258328 kubelet[3483]: I0715 23:15:09.258306 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw75k\" (UniqueName: \"kubernetes.io/projected/ebad4995-7e5b-4942-987a-9ae9ac290621-kube-api-access-gw75k\") pod \"csi-node-driver-dhhj5\" (UID: \"ebad4995-7e5b-4942-987a-9ae9ac290621\") " pod="calico-system/csi-node-driver-dhhj5"
Jul 15 23:15:09.258469 kubelet[3483]: E0715 23:15:09.258453 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.258469 kubelet[3483]: W0715 23:15:09.258464 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.258512 kubelet[3483]: E0715 23:15:09.258472 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.258640 kubelet[3483]: E0715 23:15:09.258624 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.258640 kubelet[3483]: W0715 23:15:09.258633 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.258640 kubelet[3483]: E0715 23:15:09.258639 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.258778 kubelet[3483]: E0715 23:15:09.258764 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.258778 kubelet[3483]: W0715 23:15:09.258773 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.258952 kubelet[3483]: E0715 23:15:09.258780 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.258952 kubelet[3483]: I0715 23:15:09.258795 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebad4995-7e5b-4942-987a-9ae9ac290621-registration-dir\") pod \"csi-node-driver-dhhj5\" (UID: \"ebad4995-7e5b-4942-987a-9ae9ac290621\") " pod="calico-system/csi-node-driver-dhhj5"
Jul 15 23:15:09.259240 kubelet[3483]: E0715 23:15:09.259223 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.259423 kubelet[3483]: W0715 23:15:09.259319 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.259423 kubelet[3483]: E0715 23:15:09.259337 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.259549 kubelet[3483]: E0715 23:15:09.259537 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.259592 kubelet[3483]: W0715 23:15:09.259583 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.259633 kubelet[3483]: E0715 23:15:09.259624 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.259859 kubelet[3483]: E0715 23:15:09.259838 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.259859 kubelet[3483]: W0715 23:15:09.259857 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.259925 kubelet[3483]: E0715 23:15:09.259866 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 15 23:15:09.259999 kubelet[3483]: E0715 23:15:09.259987 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.259999 kubelet[3483]: W0715 23:15:09.259996 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.260043 kubelet[3483]: E0715 23:15:09.260003 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.260163 kubelet[3483]: E0715 23:15:09.260147 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.260163 kubelet[3483]: W0715 23:15:09.260159 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.260212 kubelet[3483]: E0715 23:15:09.260169 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.260338 kubelet[3483]: E0715 23:15:09.260325 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.260338 kubelet[3483]: W0715 23:15:09.260336 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.260401 kubelet[3483]: E0715 23:15:09.260343 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.316595 containerd[1886]: time="2025-07-15T23:15:09.316046985Z" level=info msg="connecting to shim 6beed67f8319709b32420f27e3386d2dbaeeb049b6574504a975b17fd445d55d" address="unix:///run/containerd/s/7feb167c37f998b5839d5ac0c55890f950c70acdeb1ea95669c2fb1081a452b9" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:15:09.345432 systemd[1]: Started cri-containerd-6beed67f8319709b32420f27e3386d2dbaeeb049b6574504a975b17fd445d55d.scope - libcontainer container 6beed67f8319709b32420f27e3386d2dbaeeb049b6574504a975b17fd445d55d.
Jul 15 23:15:09.362400 kubelet[3483]: E0715 23:15:09.362223 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.362681 kubelet[3483]: W0715 23:15:09.362608 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.363069 kubelet[3483]: E0715 23:15:09.362984 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.363322 kubelet[3483]: E0715 23:15:09.363300 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.363322 kubelet[3483]: W0715 23:15:09.363318 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.363508 kubelet[3483]: E0715 23:15:09.363331 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.363508 kubelet[3483]: E0715 23:15:09.363470 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.363508 kubelet[3483]: W0715 23:15:09.363476 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.363508 kubelet[3483]: E0715 23:15:09.363483 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.363827 kubelet[3483]: E0715 23:15:09.363596 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.363827 kubelet[3483]: W0715 23:15:09.363602 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.363827 kubelet[3483]: E0715 23:15:09.363608 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.364332 kubelet[3483]: E0715 23:15:09.364230 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.364884 kubelet[3483]: W0715 23:15:09.364420 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.364884 kubelet[3483]: E0715 23:15:09.364499 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.365367 kubelet[3483]: E0715 23:15:09.365342 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.365367 kubelet[3483]: W0715 23:15:09.365357 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.365367 kubelet[3483]: E0715 23:15:09.365367 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:15:09.365707 containerd[1886]: time="2025-07-15T23:15:09.365670240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tjlt8,Uid:c33c4a1a-3601-46b7-8b2e-e189c38bbbec,Namespace:calico-system,Attempt:0,}"
Jul 15 23:15:09.366054 kubelet[3483]: E0715 23:15:09.366036 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:15:09.367032 kubelet[3483]: W0715 23:15:09.366050 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:15:09.367032 kubelet[3483]: E0715 23:15:09.367033 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 15 23:15:09.367459 kubelet[3483]: E0715 23:15:09.367203 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.367459 kubelet[3483]: W0715 23:15:09.367212 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.367459 kubelet[3483]: E0715 23:15:09.367219 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.367459 kubelet[3483]: E0715 23:15:09.367346 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.367459 kubelet[3483]: W0715 23:15:09.367355 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.367459 kubelet[3483]: E0715 23:15:09.367361 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.367581 kubelet[3483]: E0715 23:15:09.367473 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.367581 kubelet[3483]: W0715 23:15:09.367478 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.367581 kubelet[3483]: E0715 23:15:09.367484 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.367581 kubelet[3483]: E0715 23:15:09.367568 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.367581 kubelet[3483]: W0715 23:15:09.367573 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.367581 kubelet[3483]: E0715 23:15:09.367577 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.367668 kubelet[3483]: E0715 23:15:09.367649 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.367668 kubelet[3483]: W0715 23:15:09.367653 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.367668 kubelet[3483]: E0715 23:15:09.367657 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.368358 kubelet[3483]: E0715 23:15:09.367765 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.368358 kubelet[3483]: W0715 23:15:09.367774 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.368358 kubelet[3483]: E0715 23:15:09.367780 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.368358 kubelet[3483]: E0715 23:15:09.367971 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.368358 kubelet[3483]: W0715 23:15:09.367979 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.368358 kubelet[3483]: E0715 23:15:09.367987 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.369059 kubelet[3483]: E0715 23:15:09.368447 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.369059 kubelet[3483]: W0715 23:15:09.368459 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.369059 kubelet[3483]: E0715 23:15:09.368469 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.369059 kubelet[3483]: E0715 23:15:09.368591 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.369059 kubelet[3483]: W0715 23:15:09.368597 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.369059 kubelet[3483]: E0715 23:15:09.368603 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.369059 kubelet[3483]: E0715 23:15:09.368717 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.369059 kubelet[3483]: W0715 23:15:09.368722 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.369059 kubelet[3483]: E0715 23:15:09.368728 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.370198 kubelet[3483]: E0715 23:15:09.370034 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.370198 kubelet[3483]: W0715 23:15:09.370100 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.370198 kubelet[3483]: E0715 23:15:09.370114 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.370522 kubelet[3483]: E0715 23:15:09.370307 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.370522 kubelet[3483]: W0715 23:15:09.370319 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.370522 kubelet[3483]: E0715 23:15:09.370329 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.370522 kubelet[3483]: E0715 23:15:09.370435 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.370522 kubelet[3483]: W0715 23:15:09.370441 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.370522 kubelet[3483]: E0715 23:15:09.370447 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.370522 kubelet[3483]: E0715 23:15:09.370574 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.370522 kubelet[3483]: W0715 23:15:09.370583 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.370522 kubelet[3483]: E0715 23:15:09.370590 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.372077 kubelet[3483]: E0715 23:15:09.371815 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.372077 kubelet[3483]: W0715 23:15:09.371829 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.372077 kubelet[3483]: E0715 23:15:09.371840 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.372667 kubelet[3483]: E0715 23:15:09.372597 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.373015 kubelet[3483]: W0715 23:15:09.372799 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.373015 kubelet[3483]: E0715 23:15:09.372879 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.373677 kubelet[3483]: E0715 23:15:09.373660 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.374357 kubelet[3483]: W0715 23:15:09.373733 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.374357 kubelet[3483]: E0715 23:15:09.373783 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.374708 kubelet[3483]: E0715 23:15:09.374641 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.374708 kubelet[3483]: W0715 23:15:09.374673 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.374708 kubelet[3483]: E0715 23:15:09.374686 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:09.388075 kubelet[3483]: E0715 23:15:09.388040 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:09.388075 kubelet[3483]: W0715 23:15:09.388060 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:09.388075 kubelet[3483]: E0715 23:15:09.388079 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:09.417252 containerd[1886]: time="2025-07-15T23:15:09.416977814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85759bdcc7-mx6z7,Uid:976814ed-338c-4d15-980c-0cd195598a14,Namespace:calico-system,Attempt:0,} returns sandbox id \"6beed67f8319709b32420f27e3386d2dbaeeb049b6574504a975b17fd445d55d\"" Jul 15 23:15:09.420209 containerd[1886]: time="2025-07-15T23:15:09.419985418Z" level=info msg="connecting to shim 7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb" address="unix:///run/containerd/s/70098d13e5c23dc5b7425d8c39e301c1e8924d041487634afe1601ffb4a4a90a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:09.422857 containerd[1886]: time="2025-07-15T23:15:09.421628511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 23:15:09.436544 systemd[1]: Started cri-containerd-7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb.scope - libcontainer container 7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb. 
Jul 15 23:15:09.477646 containerd[1886]: time="2025-07-15T23:15:09.477614895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tjlt8,Uid:c33c4a1a-3601-46b7-8b2e-e189c38bbbec,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb\"" Jul 15 23:15:10.646151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005402004.mount: Deactivated successfully. Jul 15 23:15:10.953765 kubelet[3483]: E0715 23:15:10.953503 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhhj5" podUID="ebad4995-7e5b-4942-987a-9ae9ac290621" Jul 15 23:15:11.057784 containerd[1886]: time="2025-07-15T23:15:11.057729403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:11.060071 containerd[1886]: time="2025-07-15T23:15:11.059922319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 15 23:15:11.063413 containerd[1886]: time="2025-07-15T23:15:11.063347453Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:11.075081 containerd[1886]: time="2025-07-15T23:15:11.075015221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:11.075479 containerd[1886]: time="2025-07-15T23:15:11.075451121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.65379152s" Jul 15 23:15:11.075479 containerd[1886]: time="2025-07-15T23:15:11.075482537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 23:15:11.077753 containerd[1886]: time="2025-07-15T23:15:11.077548538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 23:15:11.094832 containerd[1886]: time="2025-07-15T23:15:11.094529171Z" level=info msg="CreateContainer within sandbox \"6beed67f8319709b32420f27e3386d2dbaeeb049b6574504a975b17fd445d55d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 23:15:11.126576 containerd[1886]: time="2025-07-15T23:15:11.125939208Z" level=info msg="Container b80e4e733d5074ff33efda3b41c858b8a4f6e9c86b0caac8bd8c500ae4efbe3c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:11.147672 containerd[1886]: time="2025-07-15T23:15:11.147619754Z" level=info msg="CreateContainer within sandbox \"6beed67f8319709b32420f27e3386d2dbaeeb049b6574504a975b17fd445d55d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b80e4e733d5074ff33efda3b41c858b8a4f6e9c86b0caac8bd8c500ae4efbe3c\"" Jul 15 23:15:11.148498 containerd[1886]: time="2025-07-15T23:15:11.148457097Z" level=info msg="StartContainer for \"b80e4e733d5074ff33efda3b41c858b8a4f6e9c86b0caac8bd8c500ae4efbe3c\"" Jul 15 23:15:11.149755 containerd[1886]: time="2025-07-15T23:15:11.149530446Z" level=info msg="connecting to shim b80e4e733d5074ff33efda3b41c858b8a4f6e9c86b0caac8bd8c500ae4efbe3c" address="unix:///run/containerd/s/7feb167c37f998b5839d5ac0c55890f950c70acdeb1ea95669c2fb1081a452b9" protocol=ttrpc version=3 Jul 15 
23:15:11.168665 systemd[1]: Started cri-containerd-b80e4e733d5074ff33efda3b41c858b8a4f6e9c86b0caac8bd8c500ae4efbe3c.scope - libcontainer container b80e4e733d5074ff33efda3b41c858b8a4f6e9c86b0caac8bd8c500ae4efbe3c. Jul 15 23:15:11.201551 containerd[1886]: time="2025-07-15T23:15:11.201377731Z" level=info msg="StartContainer for \"b80e4e733d5074ff33efda3b41c858b8a4f6e9c86b0caac8bd8c500ae4efbe3c\" returns successfully" Jul 15 23:15:12.065877 kubelet[3483]: E0715 23:15:12.065678 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.065877 kubelet[3483]: W0715 23:15:12.065817 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.065877 kubelet[3483]: E0715 23:15:12.065839 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:12.066627 kubelet[3483]: I0715 23:15:12.066510 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85759bdcc7-mx6z7" podStartSLOduration=2.41047804 podStartE2EDuration="4.066496221s" podCreationTimestamp="2025-07-15 23:15:08 +0000 UTC" firstStartedPulling="2025-07-15 23:15:09.420815721 +0000 UTC m=+20.548344781" lastFinishedPulling="2025-07-15 23:15:11.076833902 +0000 UTC m=+22.204362962" observedRunningTime="2025-07-15 23:15:12.065354558 +0000 UTC m=+23.192883626" watchObservedRunningTime="2025-07-15 23:15:12.066496221 +0000 UTC m=+23.194025281" Jul 15 23:15:12.067550 kubelet[3483]: E0715 23:15:12.067158 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.067550 kubelet[3483]: W0715 23:15:12.067173 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.067550 kubelet[3483]: E0715 23:15:12.067189 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:12.067550 kubelet[3483]: E0715 23:15:12.067399 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.067550 kubelet[3483]: W0715 23:15:12.067408 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.067550 kubelet[3483]: E0715 23:15:12.067417 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:12.068379 kubelet[3483]: E0715 23:15:12.068011 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.068379 kubelet[3483]: W0715 23:15:12.068025 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.068379 kubelet[3483]: E0715 23:15:12.068037 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:12.068709 kubelet[3483]: E0715 23:15:12.068681 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.068932 kubelet[3483]: W0715 23:15:12.068738 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.068932 kubelet[3483]: E0715 23:15:12.068867 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:12.069290 kubelet[3483]: E0715 23:15:12.069197 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.069290 kubelet[3483]: W0715 23:15:12.069210 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.069290 kubelet[3483]: E0715 23:15:12.069220 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:12.069680 kubelet[3483]: E0715 23:15:12.069653 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.070174 kubelet[3483]: W0715 23:15:12.069797 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.070342 kubelet[3483]: E0715 23:15:12.070248 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:12.070502 kubelet[3483]: E0715 23:15:12.070491 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.070792 kubelet[3483]: W0715 23:15:12.070547 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.070792 kubelet[3483]: E0715 23:15:12.070559 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:12.071014 kubelet[3483]: E0715 23:15:12.070946 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.071243 kubelet[3483]: W0715 23:15:12.071084 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.071243 kubelet[3483]: E0715 23:15:12.071102 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:12.071610 kubelet[3483]: E0715 23:15:12.071461 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.071781 kubelet[3483]: W0715 23:15:12.071677 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.071781 kubelet[3483]: E0715 23:15:12.071696 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:12.071974 kubelet[3483]: E0715 23:15:12.071963 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.072035 kubelet[3483]: W0715 23:15:12.072025 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.072083 kubelet[3483]: E0715 23:15:12.072072 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:12.072315 kubelet[3483]: E0715 23:15:12.072302 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.072387 kubelet[3483]: W0715 23:15:12.072376 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.072441 kubelet[3483]: E0715 23:15:12.072430 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:15:12.072712 kubelet[3483]: E0715 23:15:12.072626 3483 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:15:12.072712 kubelet[3483]: W0715 23:15:12.072637 3483 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:15:12.072712 kubelet[3483]: E0715 23:15:12.072646 3483 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:15:12.391397 containerd[1886]: time="2025-07-15T23:15:12.390883278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:12.404862 containerd[1886]: time="2025-07-15T23:15:12.404812931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 23:15:12.408596 containerd[1886]: time="2025-07-15T23:15:12.408567450Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:12.413862 containerd[1886]: time="2025-07-15T23:15:12.413819290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.336242735s" Jul 15 23:15:12.413862 containerd[1886]: time="2025-07-15T23:15:12.413857355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 23:15:12.414103 containerd[1886]: time="2025-07-15T23:15:12.414061025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:12.421308 containerd[1886]: time="2025-07-15T23:15:12.420986415Z" level=info msg="CreateContainer within sandbox \"7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 23:15:12.462234 containerd[1886]: time="2025-07-15T23:15:12.461501661Z" level=info msg="Container 8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:12.489178 containerd[1886]: time="2025-07-15T23:15:12.489124714Z" level=info msg="CreateContainer within sandbox \"7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3\"" Jul 15 23:15:12.489926 containerd[1886]: time="2025-07-15T23:15:12.489898095Z" level=info msg="StartContainer for \"8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3\"" Jul 15 23:15:12.491302 containerd[1886]: time="2025-07-15T23:15:12.491220939Z" level=info msg="connecting to shim 8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3" address="unix:///run/containerd/s/70098d13e5c23dc5b7425d8c39e301c1e8924d041487634afe1601ffb4a4a90a" protocol=ttrpc version=3 Jul 15 23:15:12.508433 systemd[1]: Started cri-containerd-8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3.scope - libcontainer container 8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3. Jul 15 23:15:12.545536 containerd[1886]: time="2025-07-15T23:15:12.545488986Z" level=info msg="StartContainer for \"8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3\" returns successfully" Jul 15 23:15:12.550746 systemd[1]: cri-containerd-8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3.scope: Deactivated successfully. 
Jul 15 23:15:12.553018 containerd[1886]: time="2025-07-15T23:15:12.552977935Z" level=info msg="received exit event container_id:\"8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3\" id:\"8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3\" pid:4155 exited_at:{seconds:1752621312 nanos:552602869}" Jul 15 23:15:12.553133 containerd[1886]: time="2025-07-15T23:15:12.553109867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3\" id:\"8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3\" pid:4155 exited_at:{seconds:1752621312 nanos:552602869}" Jul 15 23:15:12.576158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f2c5480316fcefdebd39bea1f55eb849a57d368b3d390ef5a84ae97c5e1d5c3-rootfs.mount: Deactivated successfully. Jul 15 23:15:12.952300 kubelet[3483]: E0715 23:15:12.952248 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhhj5" podUID="ebad4995-7e5b-4942-987a-9ae9ac290621" Jul 15 23:15:13.049118 kubelet[3483]: I0715 23:15:13.048817 3483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:15:14.054054 containerd[1886]: time="2025-07-15T23:15:14.054009722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 23:15:14.952921 kubelet[3483]: E0715 23:15:14.952856 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhhj5" podUID="ebad4995-7e5b-4942-987a-9ae9ac290621" Jul 15 23:15:16.254747 containerd[1886]: time="2025-07-15T23:15:16.254696208Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:16.257427 containerd[1886]: time="2025-07-15T23:15:16.257385026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 23:15:16.263021 containerd[1886]: time="2025-07-15T23:15:16.262960419Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:16.267062 containerd[1886]: time="2025-07-15T23:15:16.267010170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:16.267473 containerd[1886]: time="2025-07-15T23:15:16.267343939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.213190077s" Jul 15 23:15:16.267473 containerd[1886]: time="2025-07-15T23:15:16.267373420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 23:15:16.274027 containerd[1886]: time="2025-07-15T23:15:16.273978873Z" level=info msg="CreateContainer within sandbox \"7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 23:15:16.299871 containerd[1886]: time="2025-07-15T23:15:16.299060040Z" level=info msg="Container 9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878: CDI devices from CRI 
Config.CDIDevices: []" Jul 15 23:15:16.318473 containerd[1886]: time="2025-07-15T23:15:16.318430379Z" level=info msg="CreateContainer within sandbox \"7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878\"" Jul 15 23:15:16.319182 containerd[1886]: time="2025-07-15T23:15:16.319158198Z" level=info msg="StartContainer for \"9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878\"" Jul 15 23:15:16.320162 containerd[1886]: time="2025-07-15T23:15:16.320139553Z" level=info msg="connecting to shim 9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878" address="unix:///run/containerd/s/70098d13e5c23dc5b7425d8c39e301c1e8924d041487634afe1601ffb4a4a90a" protocol=ttrpc version=3 Jul 15 23:15:16.344436 systemd[1]: Started cri-containerd-9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878.scope - libcontainer container 9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878. 
Jul 15 23:15:16.376645 containerd[1886]: time="2025-07-15T23:15:16.376601461Z" level=info msg="StartContainer for \"9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878\" returns successfully" Jul 15 23:15:16.952944 kubelet[3483]: E0715 23:15:16.952498 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhhj5" podUID="ebad4995-7e5b-4942-987a-9ae9ac290621" Jul 15 23:15:17.405558 containerd[1886]: time="2025-07-15T23:15:17.405489206Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:15:17.407856 systemd[1]: cri-containerd-9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878.scope: Deactivated successfully. Jul 15 23:15:17.408330 systemd[1]: cri-containerd-9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878.scope: Consumed 316ms CPU time, 186M memory peak, 165.8M written to disk. 
Jul 15 23:15:17.411072 containerd[1886]: time="2025-07-15T23:15:17.411029389Z" level=info msg="received exit event container_id:\"9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878\" id:\"9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878\" pid:4216 exited_at:{seconds:1752621317 nanos:410664203}" Jul 15 23:15:17.411363 containerd[1886]: time="2025-07-15T23:15:17.411253980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878\" id:\"9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878\" pid:4216 exited_at:{seconds:1752621317 nanos:410664203}" Jul 15 23:15:17.426126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cc6c942eefa4320c30f9e8747f67fe66c094567add5448056cd262dd286c878-rootfs.mount: Deactivated successfully. Jul 15 23:15:17.491033 kubelet[3483]: I0715 23:15:17.491000 3483 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:15:18.254380 systemd[1]: Created slice kubepods-burstable-pod379e70c3_12ee_428a_a67d_5cf64f2aa25c.slice - libcontainer container kubepods-burstable-pod379e70c3_12ee_428a_a67d_5cf64f2aa25c.slice. Jul 15 23:15:18.275630 systemd[1]: Created slice kubepods-burstable-poda83216d8_36e6_4672_9453_104ead949734.slice - libcontainer container kubepods-burstable-poda83216d8_36e6_4672_9453_104ead949734.slice. Jul 15 23:15:18.286931 systemd[1]: Created slice kubepods-besteffort-podc83f766e_eaa7_4861_9802_86c69a17d315.slice - libcontainer container kubepods-besteffort-podc83f766e_eaa7_4861_9802_86c69a17d315.slice. Jul 15 23:15:18.294725 systemd[1]: Created slice kubepods-besteffort-pod14848f7b_7d37_411e_b6ae_df88a913bfb4.slice - libcontainer container kubepods-besteffort-pod14848f7b_7d37_411e_b6ae_df88a913bfb4.slice. 
Jul 15 23:15:18.302464 systemd[1]: Created slice kubepods-besteffort-pod35baaac4_a554_4613_a44f_3a7d53ede3f7.slice - libcontainer container kubepods-besteffort-pod35baaac4_a554_4613_a44f_3a7d53ede3f7.slice. Jul 15 23:15:18.307966 systemd[1]: Created slice kubepods-besteffort-podb8795292_5562_4461_a9bd_45b3ce7229be.slice - libcontainer container kubepods-besteffort-podb8795292_5562_4461_a9bd_45b3ce7229be.slice. Jul 15 23:15:18.312679 systemd[1]: Created slice kubepods-besteffort-podebad4995_7e5b_4942_987a_9ae9ac290621.slice - libcontainer container kubepods-besteffort-podebad4995_7e5b_4942_987a_9ae9ac290621.slice. Jul 15 23:15:18.320945 systemd[1]: Created slice kubepods-besteffort-pod774e156b_11dc_4cb5_ba54_e8738bfac49c.slice - libcontainer container kubepods-besteffort-pod774e156b_11dc_4cb5_ba54_e8738bfac49c.slice. Jul 15 23:15:18.321688 containerd[1886]: time="2025-07-15T23:15:18.321246349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhhj5,Uid:ebad4995-7e5b-4942-987a-9ae9ac290621,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:18.332986 kubelet[3483]: I0715 23:15:18.332952 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a83216d8-36e6-4672-9453-104ead949734-config-volume\") pod \"coredns-674b8bbfcf-nq752\" (UID: \"a83216d8-36e6-4672-9453-104ead949734\") " pod="kube-system/coredns-674b8bbfcf-nq752" Jul 15 23:15:18.334362 kubelet[3483]: I0715 23:15:18.334333 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35baaac4-a554-4613-a44f-3a7d53ede3f7-calico-apiserver-certs\") pod \"calico-apiserver-5684c645d7-7hkgz\" (UID: \"35baaac4-a554-4613-a44f-3a7d53ede3f7\") " pod="calico-apiserver/calico-apiserver-5684c645d7-7hkgz" Jul 15 23:15:18.335516 kubelet[3483]: I0715 23:15:18.334384 3483 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwl2s\" (UniqueName: \"kubernetes.io/projected/35baaac4-a554-4613-a44f-3a7d53ede3f7-kube-api-access-xwl2s\") pod \"calico-apiserver-5684c645d7-7hkgz\" (UID: \"35baaac4-a554-4613-a44f-3a7d53ede3f7\") " pod="calico-apiserver/calico-apiserver-5684c645d7-7hkgz" Jul 15 23:15:18.335516 kubelet[3483]: I0715 23:15:18.334434 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-ca-bundle\") pod \"whisker-8bccd948d-vhplr\" (UID: \"774e156b-11dc-4cb5-ba54-e8738bfac49c\") " pod="calico-system/whisker-8bccd948d-vhplr" Jul 15 23:15:18.335516 kubelet[3483]: I0715 23:15:18.334473 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd2gk\" (UniqueName: \"kubernetes.io/projected/c83f766e-eaa7-4861-9802-86c69a17d315-kube-api-access-nd2gk\") pod \"calico-kube-controllers-69f8c8b5f9-754zd\" (UID: \"c83f766e-eaa7-4861-9802-86c69a17d315\") " pod="calico-system/calico-kube-controllers-69f8c8b5f9-754zd" Jul 15 23:15:18.335516 kubelet[3483]: I0715 23:15:18.334486 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8795292-5562-4461-a9bd-45b3ce7229be-calico-apiserver-certs\") pod \"calico-apiserver-5684c645d7-ljgbp\" (UID: \"b8795292-5562-4461-a9bd-45b3ce7229be\") " pod="calico-apiserver/calico-apiserver-5684c645d7-ljgbp" Jul 15 23:15:18.335516 kubelet[3483]: I0715 23:15:18.334507 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-backend-key-pair\") pod \"whisker-8bccd948d-vhplr\" (UID: 
\"774e156b-11dc-4cb5-ba54-e8738bfac49c\") " pod="calico-system/whisker-8bccd948d-vhplr" Jul 15 23:15:18.335701 kubelet[3483]: I0715 23:15:18.334521 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c83f766e-eaa7-4861-9802-86c69a17d315-tigera-ca-bundle\") pod \"calico-kube-controllers-69f8c8b5f9-754zd\" (UID: \"c83f766e-eaa7-4861-9802-86c69a17d315\") " pod="calico-system/calico-kube-controllers-69f8c8b5f9-754zd" Jul 15 23:15:18.335701 kubelet[3483]: I0715 23:15:18.334533 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vxhh\" (UniqueName: \"kubernetes.io/projected/774e156b-11dc-4cb5-ba54-e8738bfac49c-kube-api-access-7vxhh\") pod \"whisker-8bccd948d-vhplr\" (UID: \"774e156b-11dc-4cb5-ba54-e8738bfac49c\") " pod="calico-system/whisker-8bccd948d-vhplr" Jul 15 23:15:18.335701 kubelet[3483]: I0715 23:15:18.334543 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/14848f7b-7d37-411e-b6ae-df88a913bfb4-goldmane-key-pair\") pod \"goldmane-768f4c5c69-97d7l\" (UID: \"14848f7b-7d37-411e-b6ae-df88a913bfb4\") " pod="calico-system/goldmane-768f4c5c69-97d7l" Jul 15 23:15:18.335701 kubelet[3483]: I0715 23:15:18.334555 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdrq\" (UniqueName: \"kubernetes.io/projected/a83216d8-36e6-4672-9453-104ead949734-kube-api-access-tqdrq\") pod \"coredns-674b8bbfcf-nq752\" (UID: \"a83216d8-36e6-4672-9453-104ead949734\") " pod="kube-system/coredns-674b8bbfcf-nq752" Jul 15 23:15:18.335701 kubelet[3483]: I0715 23:15:18.334567 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/14848f7b-7d37-411e-b6ae-df88a913bfb4-config\") pod \"goldmane-768f4c5c69-97d7l\" (UID: \"14848f7b-7d37-411e-b6ae-df88a913bfb4\") " pod="calico-system/goldmane-768f4c5c69-97d7l" Jul 15 23:15:18.335779 kubelet[3483]: I0715 23:15:18.334576 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4pm\" (UniqueName: \"kubernetes.io/projected/14848f7b-7d37-411e-b6ae-df88a913bfb4-kube-api-access-sh4pm\") pod \"goldmane-768f4c5c69-97d7l\" (UID: \"14848f7b-7d37-411e-b6ae-df88a913bfb4\") " pod="calico-system/goldmane-768f4c5c69-97d7l" Jul 15 23:15:18.335779 kubelet[3483]: I0715 23:15:18.334595 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzrs\" (UniqueName: \"kubernetes.io/projected/b8795292-5562-4461-a9bd-45b3ce7229be-kube-api-access-mwzrs\") pod \"calico-apiserver-5684c645d7-ljgbp\" (UID: \"b8795292-5562-4461-a9bd-45b3ce7229be\") " pod="calico-apiserver/calico-apiserver-5684c645d7-ljgbp" Jul 15 23:15:18.335779 kubelet[3483]: I0715 23:15:18.334607 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/379e70c3-12ee-428a-a67d-5cf64f2aa25c-config-volume\") pod \"coredns-674b8bbfcf-vfkqv\" (UID: \"379e70c3-12ee-428a-a67d-5cf64f2aa25c\") " pod="kube-system/coredns-674b8bbfcf-vfkqv" Jul 15 23:15:18.335779 kubelet[3483]: I0715 23:15:18.334616 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzlw\" (UniqueName: \"kubernetes.io/projected/379e70c3-12ee-428a-a67d-5cf64f2aa25c-kube-api-access-wxzlw\") pod \"coredns-674b8bbfcf-vfkqv\" (UID: \"379e70c3-12ee-428a-a67d-5cf64f2aa25c\") " pod="kube-system/coredns-674b8bbfcf-vfkqv" Jul 15 23:15:18.335779 kubelet[3483]: I0715 23:15:18.334625 3483 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14848f7b-7d37-411e-b6ae-df88a913bfb4-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-97d7l\" (UID: \"14848f7b-7d37-411e-b6ae-df88a913bfb4\") " pod="calico-system/goldmane-768f4c5c69-97d7l" Jul 15 23:15:18.377800 containerd[1886]: time="2025-07-15T23:15:18.377740873Z" level=error msg="Failed to destroy network for sandbox \"b55795cca4ad0962cd458dcd5f26e6f81112bd91351dba88817ee2bb08548e3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.379219 systemd[1]: run-netns-cni\x2df8dc9047\x2d0516\x2da24a\x2d4f08\x2d9a7deb2d3005.mount: Deactivated successfully. Jul 15 23:15:18.383036 containerd[1886]: time="2025-07-15T23:15:18.382953642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhhj5,Uid:ebad4995-7e5b-4942-987a-9ae9ac290621,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55795cca4ad0962cd458dcd5f26e6f81112bd91351dba88817ee2bb08548e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.383295 kubelet[3483]: E0715 23:15:18.383194 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55795cca4ad0962cd458dcd5f26e6f81112bd91351dba88817ee2bb08548e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.383343 kubelet[3483]: E0715 23:15:18.383324 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"b55795cca4ad0962cd458dcd5f26e6f81112bd91351dba88817ee2bb08548e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhhj5" Jul 15 23:15:18.383367 kubelet[3483]: E0715 23:15:18.383341 3483 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55795cca4ad0962cd458dcd5f26e6f81112bd91351dba88817ee2bb08548e3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhhj5" Jul 15 23:15:18.383477 kubelet[3483]: E0715 23:15:18.383396 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhhj5_calico-system(ebad4995-7e5b-4942-987a-9ae9ac290621)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhhj5_calico-system(ebad4995-7e5b-4942-987a-9ae9ac290621)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b55795cca4ad0962cd458dcd5f26e6f81112bd91351dba88817ee2bb08548e3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhhj5" podUID="ebad4995-7e5b-4942-987a-9ae9ac290621" Jul 15 23:15:18.558590 containerd[1886]: time="2025-07-15T23:15:18.558464695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfkqv,Uid:379e70c3-12ee-428a-a67d-5cf64f2aa25c,Namespace:kube-system,Attempt:0,}" Jul 15 23:15:18.583295 containerd[1886]: time="2025-07-15T23:15:18.583069620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-nq752,Uid:a83216d8-36e6-4672-9453-104ead949734,Namespace:kube-system,Attempt:0,}" Jul 15 23:15:18.592961 containerd[1886]: time="2025-07-15T23:15:18.592805450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f8c8b5f9-754zd,Uid:c83f766e-eaa7-4861-9802-86c69a17d315,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:18.600332 containerd[1886]: time="2025-07-15T23:15:18.600291819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-97d7l,Uid:14848f7b-7d37-411e-b6ae-df88a913bfb4,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:18.606499 containerd[1886]: time="2025-07-15T23:15:18.606405645Z" level=error msg="Failed to destroy network for sandbox \"b80c8ecc546828a8d9e59821dcd5e0167f0770d8c580b024b201110ecd4cd43a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.607617 containerd[1886]: time="2025-07-15T23:15:18.607556013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-7hkgz,Uid:35baaac4-a554-4613-a44f-3a7d53ede3f7,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:15:18.611602 containerd[1886]: time="2025-07-15T23:15:18.611500547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-ljgbp,Uid:b8795292-5562-4461-a9bd-45b3ce7229be,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:15:18.619957 containerd[1886]: time="2025-07-15T23:15:18.619771793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfkqv,Uid:379e70c3-12ee-428a-a67d-5cf64f2aa25c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80c8ecc546828a8d9e59821dcd5e0167f0770d8c580b024b201110ecd4cd43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.620683 kubelet[3483]: E0715 23:15:18.620621 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80c8ecc546828a8d9e59821dcd5e0167f0770d8c580b024b201110ecd4cd43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.621699 kubelet[3483]: E0715 23:15:18.620781 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80c8ecc546828a8d9e59821dcd5e0167f0770d8c580b024b201110ecd4cd43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vfkqv" Jul 15 23:15:18.621699 kubelet[3483]: E0715 23:15:18.620810 3483 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80c8ecc546828a8d9e59821dcd5e0167f0770d8c580b024b201110ecd4cd43a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vfkqv" Jul 15 23:15:18.621699 kubelet[3483]: E0715 23:15:18.621377 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vfkqv_kube-system(379e70c3-12ee-428a-a67d-5cf64f2aa25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vfkqv_kube-system(379e70c3-12ee-428a-a67d-5cf64f2aa25c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b80c8ecc546828a8d9e59821dcd5e0167f0770d8c580b024b201110ecd4cd43a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vfkqv" podUID="379e70c3-12ee-428a-a67d-5cf64f2aa25c" Jul 15 23:15:18.626618 containerd[1886]: time="2025-07-15T23:15:18.626468059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bccd948d-vhplr,Uid:774e156b-11dc-4cb5-ba54-e8738bfac49c,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:18.644512 containerd[1886]: time="2025-07-15T23:15:18.644428039Z" level=error msg="Failed to destroy network for sandbox \"2fa638a339e483962d016ec4e6fa16ba686ff8ee59e549fc6691a8af856373d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.671374 containerd[1886]: time="2025-07-15T23:15:18.671324132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nq752,Uid:a83216d8-36e6-4672-9453-104ead949734,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa638a339e483962d016ec4e6fa16ba686ff8ee59e549fc6691a8af856373d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.671800 kubelet[3483]: E0715 23:15:18.671753 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa638a339e483962d016ec4e6fa16ba686ff8ee59e549fc6691a8af856373d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
15 23:15:18.671889 kubelet[3483]: E0715 23:15:18.671822 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa638a339e483962d016ec4e6fa16ba686ff8ee59e549fc6691a8af856373d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nq752" Jul 15 23:15:18.671889 kubelet[3483]: E0715 23:15:18.671844 3483 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa638a339e483962d016ec4e6fa16ba686ff8ee59e549fc6691a8af856373d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nq752" Jul 15 23:15:18.671998 kubelet[3483]: E0715 23:15:18.671889 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nq752_kube-system(a83216d8-36e6-4672-9453-104ead949734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nq752_kube-system(a83216d8-36e6-4672-9453-104ead949734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fa638a339e483962d016ec4e6fa16ba686ff8ee59e549fc6691a8af856373d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nq752" podUID="a83216d8-36e6-4672-9453-104ead949734" Jul 15 23:15:18.699863 containerd[1886]: time="2025-07-15T23:15:18.699818172Z" level=error msg="Failed to destroy network for sandbox \"fbc91cb1248140b94f986888c2c4d38559dc54e2a0ee2b405ff4f3b88e20d926\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.704574 containerd[1886]: time="2025-07-15T23:15:18.704515231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f8c8b5f9-754zd,Uid:c83f766e-eaa7-4861-9802-86c69a17d315,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc91cb1248140b94f986888c2c4d38559dc54e2a0ee2b405ff4f3b88e20d926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.705750 kubelet[3483]: E0715 23:15:18.705691 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc91cb1248140b94f986888c2c4d38559dc54e2a0ee2b405ff4f3b88e20d926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.705863 kubelet[3483]: E0715 23:15:18.705749 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc91cb1248140b94f986888c2c4d38559dc54e2a0ee2b405ff4f3b88e20d926\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69f8c8b5f9-754zd" Jul 15 23:15:18.705863 kubelet[3483]: E0715 23:15:18.705775 3483 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc91cb1248140b94f986888c2c4d38559dc54e2a0ee2b405ff4f3b88e20d926\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69f8c8b5f9-754zd" Jul 15 23:15:18.705863 kubelet[3483]: E0715 23:15:18.705818 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69f8c8b5f9-754zd_calico-system(c83f766e-eaa7-4861-9802-86c69a17d315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69f8c8b5f9-754zd_calico-system(c83f766e-eaa7-4861-9802-86c69a17d315)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbc91cb1248140b94f986888c2c4d38559dc54e2a0ee2b405ff4f3b88e20d926\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69f8c8b5f9-754zd" podUID="c83f766e-eaa7-4861-9802-86c69a17d315" Jul 15 23:15:18.720140 containerd[1886]: time="2025-07-15T23:15:18.720059912Z" level=error msg="Failed to destroy network for sandbox \"c5e717305fb2e508066d34d56bb9639e72925e15b201a26fd824ac17914da529\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.725603 containerd[1886]: time="2025-07-15T23:15:18.725546344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-97d7l,Uid:14848f7b-7d37-411e-b6ae-df88a913bfb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e717305fb2e508066d34d56bb9639e72925e15b201a26fd824ac17914da529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.725851 kubelet[3483]: E0715 23:15:18.725780 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e717305fb2e508066d34d56bb9639e72925e15b201a26fd824ac17914da529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.725851 kubelet[3483]: E0715 23:15:18.725835 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e717305fb2e508066d34d56bb9639e72925e15b201a26fd824ac17914da529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-97d7l" Jul 15 23:15:18.725944 kubelet[3483]: E0715 23:15:18.725852 3483 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e717305fb2e508066d34d56bb9639e72925e15b201a26fd824ac17914da529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-97d7l" Jul 15 23:15:18.725944 kubelet[3483]: E0715 23:15:18.725910 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-97d7l_calico-system(14848f7b-7d37-411e-b6ae-df88a913bfb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-97d7l_calico-system(14848f7b-7d37-411e-b6ae-df88a913bfb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c5e717305fb2e508066d34d56bb9639e72925e15b201a26fd824ac17914da529\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-97d7l" podUID="14848f7b-7d37-411e-b6ae-df88a913bfb4" Jul 15 23:15:18.739120 containerd[1886]: time="2025-07-15T23:15:18.738979062Z" level=error msg="Failed to destroy network for sandbox \"09606c237641c45e9ad7a4059f75ad8dfdcb4fb47b4234a5cedb59734478212f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.743438 containerd[1886]: time="2025-07-15T23:15:18.743373472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bccd948d-vhplr,Uid:774e156b-11dc-4cb5-ba54-e8738bfac49c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09606c237641c45e9ad7a4059f75ad8dfdcb4fb47b4234a5cedb59734478212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.744189 kubelet[3483]: E0715 23:15:18.743784 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09606c237641c45e9ad7a4059f75ad8dfdcb4fb47b4234a5cedb59734478212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.744189 kubelet[3483]: E0715 23:15:18.743844 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"09606c237641c45e9ad7a4059f75ad8dfdcb4fb47b4234a5cedb59734478212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8bccd948d-vhplr" Jul 15 23:15:18.744189 kubelet[3483]: E0715 23:15:18.743882 3483 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09606c237641c45e9ad7a4059f75ad8dfdcb4fb47b4234a5cedb59734478212f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8bccd948d-vhplr" Jul 15 23:15:18.744393 kubelet[3483]: E0715 23:15:18.743931 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8bccd948d-vhplr_calico-system(774e156b-11dc-4cb5-ba54-e8738bfac49c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8bccd948d-vhplr_calico-system(774e156b-11dc-4cb5-ba54-e8738bfac49c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09606c237641c45e9ad7a4059f75ad8dfdcb4fb47b4234a5cedb59734478212f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8bccd948d-vhplr" podUID="774e156b-11dc-4cb5-ba54-e8738bfac49c" Jul 15 23:15:18.746944 containerd[1886]: time="2025-07-15T23:15:18.746902803Z" level=error msg="Failed to destroy network for sandbox \"806568bc1259056e77e21629c298c94806f005c9e31cf45edddcc7c449a39401\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.748196 
containerd[1886]: time="2025-07-15T23:15:18.747986057Z" level=error msg="Failed to destroy network for sandbox \"27c31c6c5c6e14dac3d01f6ea47d0c9d3f58ce72d02ae559a5ec8d4c2f1563cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.753850 containerd[1886]: time="2025-07-15T23:15:18.753808267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-ljgbp,Uid:b8795292-5562-4461-a9bd-45b3ce7229be,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"806568bc1259056e77e21629c298c94806f005c9e31cf45edddcc7c449a39401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.754058 kubelet[3483]: E0715 23:15:18.754004 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806568bc1259056e77e21629c298c94806f005c9e31cf45edddcc7c449a39401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.754058 kubelet[3483]: E0715 23:15:18.754053 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806568bc1259056e77e21629c298c94806f005c9e31cf45edddcc7c449a39401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5684c645d7-ljgbp" Jul 15 23:15:18.754129 kubelet[3483]: E0715 23:15:18.754069 3483 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806568bc1259056e77e21629c298c94806f005c9e31cf45edddcc7c449a39401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5684c645d7-ljgbp" Jul 15 23:15:18.754190 kubelet[3483]: E0715 23:15:18.754158 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5684c645d7-ljgbp_calico-apiserver(b8795292-5562-4461-a9bd-45b3ce7229be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5684c645d7-ljgbp_calico-apiserver(b8795292-5562-4461-a9bd-45b3ce7229be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"806568bc1259056e77e21629c298c94806f005c9e31cf45edddcc7c449a39401\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5684c645d7-ljgbp" podUID="b8795292-5562-4461-a9bd-45b3ce7229be" Jul 15 23:15:18.761847 containerd[1886]: time="2025-07-15T23:15:18.761797913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-7hkgz,Uid:35baaac4-a554-4613-a44f-3a7d53ede3f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c31c6c5c6e14dac3d01f6ea47d0c9d3f58ce72d02ae559a5ec8d4c2f1563cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.762257 kubelet[3483]: E0715 23:15:18.762222 3483 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"27c31c6c5c6e14dac3d01f6ea47d0c9d3f58ce72d02ae559a5ec8d4c2f1563cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:15:18.762367 kubelet[3483]: E0715 23:15:18.762297 3483 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c31c6c5c6e14dac3d01f6ea47d0c9d3f58ce72d02ae559a5ec8d4c2f1563cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5684c645d7-7hkgz" Jul 15 23:15:18.762367 kubelet[3483]: E0715 23:15:18.762315 3483 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c31c6c5c6e14dac3d01f6ea47d0c9d3f58ce72d02ae559a5ec8d4c2f1563cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5684c645d7-7hkgz" Jul 15 23:15:18.762407 kubelet[3483]: E0715 23:15:18.762365 3483 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5684c645d7-7hkgz_calico-apiserver(35baaac4-a554-4613-a44f-3a7d53ede3f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5684c645d7-7hkgz_calico-apiserver(35baaac4-a554-4613-a44f-3a7d53ede3f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c31c6c5c6e14dac3d01f6ea47d0c9d3f58ce72d02ae559a5ec8d4c2f1563cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5684c645d7-7hkgz" podUID="35baaac4-a554-4613-a44f-3a7d53ede3f7" Jul 15 23:15:19.068824 containerd[1886]: time="2025-07-15T23:15:19.068760767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 23:15:22.912644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509082927.mount: Deactivated successfully. Jul 15 23:15:23.189659 containerd[1886]: time="2025-07-15T23:15:23.189434415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:23.193437 containerd[1886]: time="2025-07-15T23:15:23.193383101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 23:15:23.197190 containerd[1886]: time="2025-07-15T23:15:23.197117029Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:23.201756 containerd[1886]: time="2025-07-15T23:15:23.201694244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:23.202297 containerd[1886]: time="2025-07-15T23:15:23.202016789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.133215037s" Jul 15 23:15:23.202297 containerd[1886]: time="2025-07-15T23:15:23.202048766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference 
\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 23:15:23.223615 containerd[1886]: time="2025-07-15T23:15:23.223572541Z" level=info msg="CreateContainer within sandbox \"7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 23:15:23.254015 containerd[1886]: time="2025-07-15T23:15:23.253967955Z" level=info msg="Container e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:23.276152 containerd[1886]: time="2025-07-15T23:15:23.276083874Z" level=info msg="CreateContainer within sandbox \"7e78f10e50a411ccffdd24cb0aa745731c09a21cef5d203311b2de93d72a99cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\"" Jul 15 23:15:23.276872 containerd[1886]: time="2025-07-15T23:15:23.276761573Z" level=info msg="StartContainer for \"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\"" Jul 15 23:15:23.278528 containerd[1886]: time="2025-07-15T23:15:23.278500981Z" level=info msg="connecting to shim e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc" address="unix:///run/containerd/s/70098d13e5c23dc5b7425d8c39e301c1e8924d041487634afe1601ffb4a4a90a" protocol=ttrpc version=3 Jul 15 23:15:23.298450 systemd[1]: Started cri-containerd-e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc.scope - libcontainer container e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc. Jul 15 23:15:23.338202 containerd[1886]: time="2025-07-15T23:15:23.338157465Z" level=info msg="StartContainer for \"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\" returns successfully" Jul 15 23:15:23.609231 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 23:15:23.609433 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 15 23:15:23.786304 kubelet[3483]: I0715 23:15:23.784973 3483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vxhh\" (UniqueName: \"kubernetes.io/projected/774e156b-11dc-4cb5-ba54-e8738bfac49c-kube-api-access-7vxhh\") pod \"774e156b-11dc-4cb5-ba54-e8738bfac49c\" (UID: \"774e156b-11dc-4cb5-ba54-e8738bfac49c\") " Jul 15 23:15:23.786304 kubelet[3483]: I0715 23:15:23.785061 3483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-backend-key-pair\") pod \"774e156b-11dc-4cb5-ba54-e8738bfac49c\" (UID: \"774e156b-11dc-4cb5-ba54-e8738bfac49c\") " Jul 15 23:15:23.786304 kubelet[3483]: I0715 23:15:23.785095 3483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-ca-bundle\") pod \"774e156b-11dc-4cb5-ba54-e8738bfac49c\" (UID: \"774e156b-11dc-4cb5-ba54-e8738bfac49c\") " Jul 15 23:15:23.786304 kubelet[3483]: I0715 23:15:23.785414 3483 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "774e156b-11dc-4cb5-ba54-e8738bfac49c" (UID: "774e156b-11dc-4cb5-ba54-e8738bfac49c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:15:23.786814 kubelet[3483]: I0715 23:15:23.786784 3483 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774e156b-11dc-4cb5-ba54-e8738bfac49c-kube-api-access-7vxhh" (OuterVolumeSpecName: "kube-api-access-7vxhh") pod "774e156b-11dc-4cb5-ba54-e8738bfac49c" (UID: "774e156b-11dc-4cb5-ba54-e8738bfac49c"). InnerVolumeSpecName "kube-api-access-7vxhh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:15:23.788299 kubelet[3483]: I0715 23:15:23.788157 3483 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "774e156b-11dc-4cb5-ba54-e8738bfac49c" (UID: "774e156b-11dc-4cb5-ba54-e8738bfac49c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:15:23.885684 kubelet[3483]: I0715 23:15:23.885535 3483 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-ca-bundle\") on node \"ci-4372.0.1-n-7d7ad51cdd\" DevicePath \"\"" Jul 15 23:15:23.885684 kubelet[3483]: I0715 23:15:23.885573 3483 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7vxhh\" (UniqueName: \"kubernetes.io/projected/774e156b-11dc-4cb5-ba54-e8738bfac49c-kube-api-access-7vxhh\") on node \"ci-4372.0.1-n-7d7ad51cdd\" DevicePath \"\"" Jul 15 23:15:23.885684 kubelet[3483]: I0715 23:15:23.885584 3483 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/774e156b-11dc-4cb5-ba54-e8738bfac49c-whisker-backend-key-pair\") on node \"ci-4372.0.1-n-7d7ad51cdd\" DevicePath \"\"" Jul 15 23:15:23.913481 systemd[1]: var-lib-kubelet-pods-774e156b\x2d11dc\x2d4cb5\x2dba54\x2de8738bfac49c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7vxhh.mount: Deactivated successfully. Jul 15 23:15:23.913567 systemd[1]: var-lib-kubelet-pods-774e156b\x2d11dc\x2d4cb5\x2dba54\x2de8738bfac49c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 23:15:24.088030 systemd[1]: Removed slice kubepods-besteffort-pod774e156b_11dc_4cb5_ba54_e8738bfac49c.slice - libcontainer container kubepods-besteffort-pod774e156b_11dc_4cb5_ba54_e8738bfac49c.slice. Jul 15 23:15:24.116980 kubelet[3483]: I0715 23:15:24.116909 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tjlt8" podStartSLOduration=1.394613734 podStartE2EDuration="15.116889937s" podCreationTimestamp="2025-07-15 23:15:09 +0000 UTC" firstStartedPulling="2025-07-15 23:15:09.480484271 +0000 UTC m=+20.608013331" lastFinishedPulling="2025-07-15 23:15:23.202760474 +0000 UTC m=+34.330289534" observedRunningTime="2025-07-15 23:15:24.10400069 +0000 UTC m=+35.231529750" watchObservedRunningTime="2025-07-15 23:15:24.116889937 +0000 UTC m=+35.244418997" Jul 15 23:15:24.186507 systemd[1]: Created slice kubepods-besteffort-pod13b5af37_b74d_414e_bf25_81730d47a5d3.slice - libcontainer container kubepods-besteffort-pod13b5af37_b74d_414e_bf25_81730d47a5d3.slice. 
Jul 15 23:15:24.289414 kubelet[3483]: I0715 23:15:24.289359 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwq6l\" (UniqueName: \"kubernetes.io/projected/13b5af37-b74d-414e-bf25-81730d47a5d3-kube-api-access-xwq6l\") pod \"whisker-6c586f6bb8-hb7d6\" (UID: \"13b5af37-b74d-414e-bf25-81730d47a5d3\") " pod="calico-system/whisker-6c586f6bb8-hb7d6" Jul 15 23:15:24.289717 kubelet[3483]: I0715 23:15:24.289503 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13b5af37-b74d-414e-bf25-81730d47a5d3-whisker-ca-bundle\") pod \"whisker-6c586f6bb8-hb7d6\" (UID: \"13b5af37-b74d-414e-bf25-81730d47a5d3\") " pod="calico-system/whisker-6c586f6bb8-hb7d6" Jul 15 23:15:24.289717 kubelet[3483]: I0715 23:15:24.289521 3483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13b5af37-b74d-414e-bf25-81730d47a5d3-whisker-backend-key-pair\") pod \"whisker-6c586f6bb8-hb7d6\" (UID: \"13b5af37-b74d-414e-bf25-81730d47a5d3\") " pod="calico-system/whisker-6c586f6bb8-hb7d6" Jul 15 23:15:24.498922 containerd[1886]: time="2025-07-15T23:15:24.498862470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c586f6bb8-hb7d6,Uid:13b5af37-b74d-414e-bf25-81730d47a5d3,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:24.607488 systemd-networkd[1577]: cali45a1fdc10f3: Link UP Jul 15 23:15:24.609095 systemd-networkd[1577]: cali45a1fdc10f3: Gained carrier Jul 15 23:15:24.623799 containerd[1886]: 2025-07-15 23:15:24.528 [INFO][4534] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:15:24.623799 containerd[1886]: 2025-07-15 23:15:24.542 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0 whisker-6c586f6bb8- calico-system 13b5af37-b74d-414e-bf25-81730d47a5d3 871 0 2025-07-15 23:15:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c586f6bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-n-7d7ad51cdd whisker-6c586f6bb8-hb7d6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali45a1fdc10f3 [] [] }} ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-" Jul 15 23:15:24.623799 containerd[1886]: 2025-07-15 23:15:24.542 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" Jul 15 23:15:24.623799 containerd[1886]: 2025-07-15 23:15:24.563 [INFO][4547] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" HandleID="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.563 [INFO][4547] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" HandleID="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b820), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"whisker-6c586f6bb8-hb7d6", "timestamp":"2025-07-15 23:15:24.563021927 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.563 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.563 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.563 [INFO][4547] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.571 [INFO][4547] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.576 [INFO][4547] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.580 [INFO][4547] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.581 [INFO][4547] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624185 containerd[1886]: 2025-07-15 23:15:24.583 [INFO][4547] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624424 containerd[1886]: 2025-07-15 23:15:24.583 [INFO][4547] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 
handle="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624424 containerd[1886]: 2025-07-15 23:15:24.585 [INFO][4547] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333 Jul 15 23:15:24.624424 containerd[1886]: 2025-07-15 23:15:24.590 [INFO][4547] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624424 containerd[1886]: 2025-07-15 23:15:24.599 [INFO][4547] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.193/26] block=192.168.26.192/26 handle="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624424 containerd[1886]: 2025-07-15 23:15:24.599 [INFO][4547] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.193/26] handle="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:24.624424 containerd[1886]: 2025-07-15 23:15:24.599 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:15:24.624424 containerd[1886]: 2025-07-15 23:15:24.599 [INFO][4547] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.193/26] IPv6=[] ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" HandleID="k8s-pod-network.c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" Jul 15 23:15:24.624537 containerd[1886]: 2025-07-15 23:15:24.602 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0", GenerateName:"whisker-6c586f6bb8-", Namespace:"calico-system", SelfLink:"", UID:"13b5af37-b74d-414e-bf25-81730d47a5d3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c586f6bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"whisker-6c586f6bb8-hb7d6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali45a1fdc10f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:24.624537 containerd[1886]: 2025-07-15 23:15:24.602 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.193/32] ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" Jul 15 23:15:24.624587 containerd[1886]: 2025-07-15 23:15:24.602 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45a1fdc10f3 ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" Jul 15 23:15:24.624587 containerd[1886]: 2025-07-15 23:15:24.608 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" Jul 15 23:15:24.624646 containerd[1886]: 2025-07-15 23:15:24.609 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0", GenerateName:"whisker-6c586f6bb8-", Namespace:"calico-system", SelfLink:"", 
UID:"13b5af37-b74d-414e-bf25-81730d47a5d3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c586f6bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333", Pod:"whisker-6c586f6bb8-hb7d6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali45a1fdc10f3", MAC:"e2:c1:75:c3:d2:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:24.624716 containerd[1886]: 2025-07-15 23:15:24.622 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" Namespace="calico-system" Pod="whisker-6c586f6bb8-hb7d6" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-whisker--6c586f6bb8--hb7d6-eth0" Jul 15 23:15:24.682911 containerd[1886]: time="2025-07-15T23:15:24.682870911Z" level=info msg="connecting to shim c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333" address="unix:///run/containerd/s/8e8bc3d6ce0b6f2ae8938d31071d69bfe910b13fb7b985554c03fd37e817db16" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:24.699426 systemd[1]: Started 
cri-containerd-c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333.scope - libcontainer container c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333. Jul 15 23:15:24.730067 containerd[1886]: time="2025-07-15T23:15:24.730017391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c586f6bb8-hb7d6,Uid:13b5af37-b74d-414e-bf25-81730d47a5d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333\"" Jul 15 23:15:24.731659 containerd[1886]: time="2025-07-15T23:15:24.731628476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 23:15:24.955160 kubelet[3483]: I0715 23:15:24.955044 3483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774e156b-11dc-4cb5-ba54-e8738bfac49c" path="/var/lib/kubelet/pods/774e156b-11dc-4cb5-ba54-e8738bfac49c/volumes" Jul 15 23:15:25.905326 containerd[1886]: time="2025-07-15T23:15:25.905105896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:25.907298 containerd[1886]: time="2025-07-15T23:15:25.907258123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 23:15:25.911117 containerd[1886]: time="2025-07-15T23:15:25.911053067Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:25.918148 containerd[1886]: time="2025-07-15T23:15:25.918113676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:25.918558 containerd[1886]: time="2025-07-15T23:15:25.918531448Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.186870428s" Jul 15 23:15:25.918636 containerd[1886]: time="2025-07-15T23:15:25.918562801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 23:15:25.924846 containerd[1886]: time="2025-07-15T23:15:25.924811324Z" level=info msg="CreateContainer within sandbox \"c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 23:15:25.956852 containerd[1886]: time="2025-07-15T23:15:25.956801177Z" level=info msg="Container d9c4965c0b132e865f4dfbcff4ccdbc2354685ce8ab88b359e508c7bdf713cf3: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:25.977823 containerd[1886]: time="2025-07-15T23:15:25.977770024Z" level=info msg="CreateContainer within sandbox \"c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d9c4965c0b132e865f4dfbcff4ccdbc2354685ce8ab88b359e508c7bdf713cf3\"" Jul 15 23:15:25.978447 containerd[1886]: time="2025-07-15T23:15:25.978390193Z" level=info msg="StartContainer for \"d9c4965c0b132e865f4dfbcff4ccdbc2354685ce8ab88b359e508c7bdf713cf3\"" Jul 15 23:15:25.979872 containerd[1886]: time="2025-07-15T23:15:25.979680116Z" level=info msg="connecting to shim d9c4965c0b132e865f4dfbcff4ccdbc2354685ce8ab88b359e508c7bdf713cf3" address="unix:///run/containerd/s/8e8bc3d6ce0b6f2ae8938d31071d69bfe910b13fb7b985554c03fd37e817db16" protocol=ttrpc version=3 Jul 15 23:15:25.999465 systemd[1]: Started 
cri-containerd-d9c4965c0b132e865f4dfbcff4ccdbc2354685ce8ab88b359e508c7bdf713cf3.scope - libcontainer container d9c4965c0b132e865f4dfbcff4ccdbc2354685ce8ab88b359e508c7bdf713cf3. Jul 15 23:15:26.041712 containerd[1886]: time="2025-07-15T23:15:26.041675344Z" level=info msg="StartContainer for \"d9c4965c0b132e865f4dfbcff4ccdbc2354685ce8ab88b359e508c7bdf713cf3\" returns successfully" Jul 15 23:15:26.044565 containerd[1886]: time="2025-07-15T23:15:26.044527222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 23:15:26.353492 systemd-networkd[1577]: cali45a1fdc10f3: Gained IPv6LL Jul 15 23:15:27.059354 kubelet[3483]: I0715 23:15:27.059217 3483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:15:27.818757 systemd-networkd[1577]: vxlan.calico: Link UP Jul 15 23:15:27.818764 systemd-networkd[1577]: vxlan.calico: Gained carrier Jul 15 23:15:28.130522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774455546.mount: Deactivated successfully. 
Jul 15 23:15:28.747453 containerd[1886]: time="2025-07-15T23:15:28.747303277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:28.749970 containerd[1886]: time="2025-07-15T23:15:28.749936661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 23:15:28.753127 containerd[1886]: time="2025-07-15T23:15:28.753083203Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:28.757881 containerd[1886]: time="2025-07-15T23:15:28.757825277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:28.758396 containerd[1886]: time="2025-07-15T23:15:28.758169407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.713608545s" Jul 15 23:15:28.758396 containerd[1886]: time="2025-07-15T23:15:28.758198976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 23:15:28.765952 containerd[1886]: time="2025-07-15T23:15:28.765909171Z" level=info msg="CreateContainer within sandbox \"c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 23:15:28.794217 
containerd[1886]: time="2025-07-15T23:15:28.794166274Z" level=info msg="Container b55169ac3b956bde98ec7db311186e93c9e3c4336313c54ff1659bd0ae112b92: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:28.815888 containerd[1886]: time="2025-07-15T23:15:28.815842700Z" level=info msg="CreateContainer within sandbox \"c3f2c4ff5f0e205d62a80dbe0546507c6e045257597aaee7f7201735d3861333\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b55169ac3b956bde98ec7db311186e93c9e3c4336313c54ff1659bd0ae112b92\"" Jul 15 23:15:28.816526 containerd[1886]: time="2025-07-15T23:15:28.816401283Z" level=info msg="StartContainer for \"b55169ac3b956bde98ec7db311186e93c9e3c4336313c54ff1659bd0ae112b92\"" Jul 15 23:15:28.818338 containerd[1886]: time="2025-07-15T23:15:28.818311824Z" level=info msg="connecting to shim b55169ac3b956bde98ec7db311186e93c9e3c4336313c54ff1659bd0ae112b92" address="unix:///run/containerd/s/8e8bc3d6ce0b6f2ae8938d31071d69bfe910b13fb7b985554c03fd37e817db16" protocol=ttrpc version=3 Jul 15 23:15:28.838454 systemd[1]: Started cri-containerd-b55169ac3b956bde98ec7db311186e93c9e3c4336313c54ff1659bd0ae112b92.scope - libcontainer container b55169ac3b956bde98ec7db311186e93c9e3c4336313c54ff1659bd0ae112b92. 
Jul 15 23:15:28.869498 containerd[1886]: time="2025-07-15T23:15:28.869403432Z" level=info msg="StartContainer for \"b55169ac3b956bde98ec7db311186e93c9e3c4336313c54ff1659bd0ae112b92\" returns successfully" Jul 15 23:15:29.617577 systemd-networkd[1577]: vxlan.calico: Gained IPv6LL Jul 15 23:15:29.954321 containerd[1886]: time="2025-07-15T23:15:29.953966545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-97d7l,Uid:14848f7b-7d37-411e-b6ae-df88a913bfb4,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:29.954903 containerd[1886]: time="2025-07-15T23:15:29.954617787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f8c8b5f9-754zd,Uid:c83f766e-eaa7-4861-9802-86c69a17d315,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:29.954903 containerd[1886]: time="2025-07-15T23:15:29.954826313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhhj5,Uid:ebad4995-7e5b-4942-987a-9ae9ac290621,Namespace:calico-system,Attempt:0,}" Jul 15 23:15:30.109887 systemd-networkd[1577]: cali46e0681e44a: Link UP Jul 15 23:15:30.110261 systemd-networkd[1577]: cali46e0681e44a: Gained carrier Jul 15 23:15:30.130347 kubelet[3483]: I0715 23:15:30.129868 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c586f6bb8-hb7d6" podStartSLOduration=2.102125283 podStartE2EDuration="6.129849175s" podCreationTimestamp="2025-07-15 23:15:24 +0000 UTC" firstStartedPulling="2025-07-15 23:15:24.731456399 +0000 UTC m=+35.858985459" lastFinishedPulling="2025-07-15 23:15:28.759180251 +0000 UTC m=+39.886709351" observedRunningTime="2025-07-15 23:15:29.136050541 +0000 UTC m=+40.263579609" watchObservedRunningTime="2025-07-15 23:15:30.129849175 +0000 UTC m=+41.257378235" Jul 15 23:15:30.132495 containerd[1886]: 2025-07-15 23:15:30.008 [INFO][4940] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0 goldmane-768f4c5c69- calico-system 14848f7b-7d37-411e-b6ae-df88a913bfb4 807 0 2025-07-15 23:15:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-n-7d7ad51cdd goldmane-768f4c5c69-97d7l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali46e0681e44a [] [] }} ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-" Jul 15 23:15:30.132495 containerd[1886]: 2025-07-15 23:15:30.010 [INFO][4940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" Jul 15 23:15:30.132495 containerd[1886]: 2025-07-15 23:15:30.050 [INFO][4976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" HandleID="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.050 [INFO][4976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" HandleID="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"goldmane-768f4c5c69-97d7l", "timestamp":"2025-07-15 23:15:30.050460423 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.050 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.050 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.050 [INFO][4976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.062 [INFO][4976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.069 [INFO][4976] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.075 [INFO][4976] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.077 [INFO][4976] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132678 containerd[1886]: 2025-07-15 23:15:30.079 [INFO][4976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132829 containerd[1886]: 2025-07-15 23:15:30.079 [INFO][4976] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 
handle="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132829 containerd[1886]: 2025-07-15 23:15:30.081 [INFO][4976] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0 Jul 15 23:15:30.132829 containerd[1886]: 2025-07-15 23:15:30.086 [INFO][4976] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132829 containerd[1886]: 2025-07-15 23:15:30.100 [INFO][4976] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.194/26] block=192.168.26.192/26 handle="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132829 containerd[1886]: 2025-07-15 23:15:30.101 [INFO][4976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.194/26] handle="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.132829 containerd[1886]: 2025-07-15 23:15:30.101 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:15:30.132829 containerd[1886]: 2025-07-15 23:15:30.101 [INFO][4976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.194/26] IPv6=[] ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" HandleID="k8s-pod-network.94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" Jul 15 23:15:30.132923 containerd[1886]: 2025-07-15 23:15:30.102 [INFO][4940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"14848f7b-7d37-411e-b6ae-df88a913bfb4", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"goldmane-768f4c5c69-97d7l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali46e0681e44a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:30.132923 containerd[1886]: 2025-07-15 23:15:30.103 [INFO][4940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.194/32] ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" Jul 15 23:15:30.132975 containerd[1886]: 2025-07-15 23:15:30.103 [INFO][4940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46e0681e44a ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" Jul 15 23:15:30.132975 containerd[1886]: 2025-07-15 23:15:30.110 [INFO][4940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" Jul 15 23:15:30.133004 containerd[1886]: 2025-07-15 23:15:30.112 [INFO][4940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", 
UID:"14848f7b-7d37-411e-b6ae-df88a913bfb4", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0", Pod:"goldmane-768f4c5c69-97d7l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46e0681e44a", MAC:"7e:9d:a6:0a:7d:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:30.133037 containerd[1886]: 2025-07-15 23:15:30.128 [INFO][4940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-97d7l" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-goldmane--768f4c5c69--97d7l-eth0" Jul 15 23:15:30.191198 containerd[1886]: time="2025-07-15T23:15:30.191149087Z" level=info msg="connecting to shim 94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0" address="unix:///run/containerd/s/b1054c80fc545e307a3c537b7cf06231b1f740e987b7e27d788d66b72355aab2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:30.212856 systemd-networkd[1577]: calid6322b97fa7: Link UP Jul 15 23:15:30.214825 
systemd-networkd[1577]: calid6322b97fa7: Gained carrier Jul 15 23:15:30.230559 systemd[1]: Started cri-containerd-94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0.scope - libcontainer container 94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0. Jul 15 23:15:30.235411 containerd[1886]: 2025-07-15 23:15:30.025 [INFO][4949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0 calico-kube-controllers-69f8c8b5f9- calico-system c83f766e-eaa7-4861-9802-86c69a17d315 806 0 2025-07-15 23:15:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69f8c8b5f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-n-7d7ad51cdd calico-kube-controllers-69f8c8b5f9-754zd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid6322b97fa7 [] [] }} ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-" Jul 15 23:15:30.235411 containerd[1886]: 2025-07-15 23:15:30.025 [INFO][4949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" Jul 15 23:15:30.235411 containerd[1886]: 2025-07-15 23:15:30.064 [INFO][4982] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" 
HandleID="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.064 [INFO][4982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" HandleID="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"calico-kube-controllers-69f8c8b5f9-754zd", "timestamp":"2025-07-15 23:15:30.064411557 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.064 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.101 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.101 [INFO][4982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.162 [INFO][4982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.169 [INFO][4982] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.174 [INFO][4982] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.176 [INFO][4982] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.235599 containerd[1886]: 2025-07-15 23:15:30.178 [INFO][4982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.236354 containerd[1886]: 2025-07-15 23:15:30.178 [INFO][4982] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.236354 containerd[1886]: 2025-07-15 23:15:30.181 [INFO][4982] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605 Jul 15 23:15:30.236354 containerd[1886]: 2025-07-15 23:15:30.188 [INFO][4982] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.236354 containerd[1886]: 2025-07-15 23:15:30.200 [INFO][4982] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.26.195/26] block=192.168.26.192/26 handle="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.236354 containerd[1886]: 2025-07-15 23:15:30.200 [INFO][4982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.195/26] handle="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.236354 containerd[1886]: 2025-07-15 23:15:30.201 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:15:30.236354 containerd[1886]: 2025-07-15 23:15:30.201 [INFO][4982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.195/26] IPv6=[] ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" HandleID="k8s-pod-network.6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" Jul 15 23:15:30.236585 containerd[1886]: 2025-07-15 23:15:30.204 [INFO][4949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0", GenerateName:"calico-kube-controllers-69f8c8b5f9-", Namespace:"calico-system", SelfLink:"", UID:"c83f766e-eaa7-4861-9802-86c69a17d315", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f8c8b5f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"calico-kube-controllers-69f8c8b5f9-754zd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6322b97fa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:30.236631 containerd[1886]: 2025-07-15 23:15:30.204 [INFO][4949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.195/32] ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" Jul 15 23:15:30.236631 containerd[1886]: 2025-07-15 23:15:30.204 [INFO][4949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6322b97fa7 ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" Jul 15 23:15:30.236631 containerd[1886]: 2025-07-15 23:15:30.214 [INFO][4949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" Jul 15 23:15:30.236682 containerd[1886]: 2025-07-15 23:15:30.215 [INFO][4949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0", GenerateName:"calico-kube-controllers-69f8c8b5f9-", Namespace:"calico-system", SelfLink:"", UID:"c83f766e-eaa7-4861-9802-86c69a17d315", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f8c8b5f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605", Pod:"calico-kube-controllers-69f8c8b5f9-754zd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6322b97fa7", MAC:"ca:00:35:36:aa:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:30.236925 containerd[1886]: 2025-07-15 23:15:30.230 [INFO][4949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" Namespace="calico-system" Pod="calico-kube-controllers-69f8c8b5f9-754zd" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--kube--controllers--69f8c8b5f9--754zd-eth0" Jul 15 23:15:30.284598 containerd[1886]: time="2025-07-15T23:15:30.284556655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-97d7l,Uid:14848f7b-7d37-411e-b6ae-df88a913bfb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0\"" Jul 15 23:15:30.287620 containerd[1886]: time="2025-07-15T23:15:30.287498704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 23:15:30.298347 containerd[1886]: time="2025-07-15T23:15:30.298104811Z" level=info msg="connecting to shim 6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605" address="unix:///run/containerd/s/6f92796652fa7b8782c22b69c62ff447bf1382ecbcfe703888ce90e712944b7d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:30.309835 systemd-networkd[1577]: cali777d1000a32: Link UP Jul 15 23:15:30.312962 systemd-networkd[1577]: cali777d1000a32: Gained carrier Jul 15 23:15:30.334489 systemd[1]: Started cri-containerd-6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605.scope - libcontainer container 6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605. 
Jul 15 23:15:30.340117 containerd[1886]: 2025-07-15 23:15:30.046 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0 csi-node-driver- calico-system ebad4995-7e5b-4942-987a-9ae9ac290621 678 0 2025-07-15 23:15:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-n-7d7ad51cdd csi-node-driver-dhhj5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali777d1000a32 [] [] }} ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-" Jul 15 23:15:30.340117 containerd[1886]: 2025-07-15 23:15:30.046 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" Jul 15 23:15:30.340117 containerd[1886]: 2025-07-15 23:15:30.077 [INFO][4989] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" HandleID="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.077 [INFO][4989] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" 
HandleID="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"csi-node-driver-dhhj5", "timestamp":"2025-07-15 23:15:30.077165586 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.077 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.201 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.201 [INFO][4989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.264 [INFO][4989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.273 [INFO][4989] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.277 [INFO][4989] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.278 [INFO][4989] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340323 containerd[1886]: 2025-07-15 23:15:30.280 [INFO][4989] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340461 containerd[1886]: 2025-07-15 23:15:30.280 [INFO][4989] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340461 containerd[1886]: 2025-07-15 23:15:30.281 [INFO][4989] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f Jul 15 23:15:30.340461 containerd[1886]: 2025-07-15 23:15:30.287 [INFO][4989] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340461 containerd[1886]: 2025-07-15 23:15:30.303 [INFO][4989] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.196/26] block=192.168.26.192/26 handle="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340461 containerd[1886]: 2025-07-15 23:15:30.303 [INFO][4989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.196/26] handle="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:30.340461 containerd[1886]: 2025-07-15 23:15:30.303 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:15:30.340461 containerd[1886]: 2025-07-15 23:15:30.303 [INFO][4989] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.196/26] IPv6=[] ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" HandleID="k8s-pod-network.b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" Jul 15 23:15:30.340555 containerd[1886]: 2025-07-15 23:15:30.305 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebad4995-7e5b-4942-987a-9ae9ac290621", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"csi-node-driver-dhhj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali777d1000a32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:30.340589 containerd[1886]: 2025-07-15 23:15:30.305 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.196/32] ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" Jul 15 23:15:30.340589 containerd[1886]: 2025-07-15 23:15:30.305 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali777d1000a32 ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" Jul 15 23:15:30.340589 containerd[1886]: 2025-07-15 23:15:30.314 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" Jul 15 23:15:30.340633 containerd[1886]: 2025-07-15 23:15:30.318 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"ebad4995-7e5b-4942-987a-9ae9ac290621", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f", Pod:"csi-node-driver-dhhj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali777d1000a32", MAC:"32:32:dc:f7:3d:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:30.340665 containerd[1886]: 2025-07-15 23:15:30.335 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" Namespace="calico-system" Pod="csi-node-driver-dhhj5" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-csi--node--driver--dhhj5-eth0" Jul 15 23:15:30.378551 containerd[1886]: time="2025-07-15T23:15:30.378510271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f8c8b5f9-754zd,Uid:c83f766e-eaa7-4861-9802-86c69a17d315,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605\"" Jul 15 23:15:30.404562 containerd[1886]: time="2025-07-15T23:15:30.404518568Z" level=info msg="connecting to shim b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f" address="unix:///run/containerd/s/903513f56735bba147d9676d7e40bdaa5836a0b40683a4c5a2aa2ca9954cf560" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:30.423422 systemd[1]: Started cri-containerd-b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f.scope - libcontainer container b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f. Jul 15 23:15:30.448239 containerd[1886]: time="2025-07-15T23:15:30.448194509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhhj5,Uid:ebad4995-7e5b-4942-987a-9ae9ac290621,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f\"" Jul 15 23:15:30.956654 containerd[1886]: time="2025-07-15T23:15:30.956354423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-7hkgz,Uid:35baaac4-a554-4613-a44f-3a7d53ede3f7,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:15:31.055926 systemd-networkd[1577]: calibfd27b5e7fe: Link UP Jul 15 23:15:31.057044 systemd-networkd[1577]: calibfd27b5e7fe: Gained carrier Jul 15 23:15:31.073860 containerd[1886]: 2025-07-15 23:15:30.993 [INFO][5164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0 calico-apiserver-5684c645d7- calico-apiserver 35baaac4-a554-4613-a44f-3a7d53ede3f7 809 0 2025-07-15 23:15:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5684c645d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4372.0.1-n-7d7ad51cdd calico-apiserver-5684c645d7-7hkgz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibfd27b5e7fe [] [] }} ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-" Jul 15 23:15:31.073860 containerd[1886]: 2025-07-15 23:15:30.994 [INFO][5164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" Jul 15 23:15:31.073860 containerd[1886]: 2025-07-15 23:15:31.013 [INFO][5176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" HandleID="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.013 [INFO][5176] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" HandleID="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"calico-apiserver-5684c645d7-7hkgz", "timestamp":"2025-07-15 23:15:31.013514478 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.013 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.013 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.013 [INFO][5176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.021 [INFO][5176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.025 [INFO][5176] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.031 [INFO][5176] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.032 [INFO][5176] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074446 containerd[1886]: 2025-07-15 23:15:31.034 [INFO][5176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074610 containerd[1886]: 2025-07-15 23:15:31.034 [INFO][5176] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074610 containerd[1886]: 2025-07-15 23:15:31.036 [INFO][5176] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e Jul 15 23:15:31.074610 
containerd[1886]: 2025-07-15 23:15:31.040 [INFO][5176] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074610 containerd[1886]: 2025-07-15 23:15:31.050 [INFO][5176] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.197/26] block=192.168.26.192/26 handle="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074610 containerd[1886]: 2025-07-15 23:15:31.050 [INFO][5176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.197/26] handle="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:31.074610 containerd[1886]: 2025-07-15 23:15:31.051 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:15:31.074610 containerd[1886]: 2025-07-15 23:15:31.051 [INFO][5176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.197/26] IPv6=[] ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" HandleID="k8s-pod-network.6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" Jul 15 23:15:31.074708 containerd[1886]: 2025-07-15 23:15:31.052 [INFO][5164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0", GenerateName:"calico-apiserver-5684c645d7-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"35baaac4-a554-4613-a44f-3a7d53ede3f7", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5684c645d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"calico-apiserver-5684c645d7-7hkgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibfd27b5e7fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:31.074743 containerd[1886]: 2025-07-15 23:15:31.052 [INFO][5164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.197/32] ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" Jul 15 23:15:31.074743 containerd[1886]: 2025-07-15 23:15:31.052 [INFO][5164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfd27b5e7fe ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" 
WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" Jul 15 23:15:31.074743 containerd[1886]: 2025-07-15 23:15:31.056 [INFO][5164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" Jul 15 23:15:31.074797 containerd[1886]: 2025-07-15 23:15:31.056 [INFO][5164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0", GenerateName:"calico-apiserver-5684c645d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"35baaac4-a554-4613-a44f-3a7d53ede3f7", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5684c645d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", 
ContainerID:"6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e", Pod:"calico-apiserver-5684c645d7-7hkgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibfd27b5e7fe", MAC:"e6:e1:a4:5a:44:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:31.074830 containerd[1886]: 2025-07-15 23:15:31.071 [INFO][5164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-7hkgz" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--7hkgz-eth0" Jul 15 23:15:31.132728 containerd[1886]: time="2025-07-15T23:15:31.132649071Z" level=info msg="connecting to shim 6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e" address="unix:///run/containerd/s/8c89b0b7c6c650b185aee1dd7ca1e17198dd6c3572996affbc8019dde97576e8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:31.152436 systemd[1]: Started cri-containerd-6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e.scope - libcontainer container 6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e. 
Jul 15 23:15:31.193120 containerd[1886]: time="2025-07-15T23:15:31.193061743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-7hkgz,Uid:35baaac4-a554-4613-a44f-3a7d53ede3f7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e\"" Jul 15 23:15:31.409601 systemd-networkd[1577]: cali46e0681e44a: Gained IPv6LL Jul 15 23:15:31.857462 systemd-networkd[1577]: cali777d1000a32: Gained IPv6LL Jul 15 23:15:31.957426 containerd[1886]: time="2025-07-15T23:15:31.957132727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-ljgbp,Uid:b8795292-5562-4461-a9bd-45b3ce7229be,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:15:31.957426 containerd[1886]: time="2025-07-15T23:15:31.957134391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfkqv,Uid:379e70c3-12ee-428a-a67d-5cf64f2aa25c,Namespace:kube-system,Attempt:0,}" Jul 15 23:15:32.049408 systemd-networkd[1577]: calid6322b97fa7: Gained IPv6LL Jul 15 23:15:32.130498 systemd-networkd[1577]: cali72ae6124c89: Link UP Jul 15 23:15:32.132857 systemd-networkd[1577]: cali72ae6124c89: Gained carrier Jul 15 23:15:32.158909 containerd[1886]: 2025-07-15 23:15:32.027 [INFO][5245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0 calico-apiserver-5684c645d7- calico-apiserver b8795292-5562-4461-a9bd-45b3ce7229be 808 0 2025-07-15 23:15:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5684c645d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-7d7ad51cdd calico-apiserver-5684c645d7-ljgbp eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali72ae6124c89 [] [] }} ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-" Jul 15 23:15:32.158909 containerd[1886]: 2025-07-15 23:15:32.027 [INFO][5245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" Jul 15 23:15:32.158909 containerd[1886]: 2025-07-15 23:15:32.066 [INFO][5270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" HandleID="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.066 [INFO][5270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" HandleID="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d37b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"calico-apiserver-5684c645d7-ljgbp", "timestamp":"2025-07-15 23:15:32.066501173 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:32.160542 
containerd[1886]: 2025-07-15 23:15:32.066 [INFO][5270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.066 [INFO][5270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.066 [INFO][5270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.079 [INFO][5270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.084 [INFO][5270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.091 [INFO][5270] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.094 [INFO][5270] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160542 containerd[1886]: 2025-07-15 23:15:32.097 [INFO][5270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160700 containerd[1886]: 2025-07-15 23:15:32.097 [INFO][5270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160700 containerd[1886]: 2025-07-15 23:15:32.100 [INFO][5270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8 Jul 15 23:15:32.160700 containerd[1886]: 2025-07-15 23:15:32.110 [INFO][5270] ipam/ipam.go 1243: Writing block in order to claim 
IPs block=192.168.26.192/26 handle="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160700 containerd[1886]: 2025-07-15 23:15:32.122 [INFO][5270] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.198/26] block=192.168.26.192/26 handle="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160700 containerd[1886]: 2025-07-15 23:15:32.122 [INFO][5270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.198/26] handle="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.160700 containerd[1886]: 2025-07-15 23:15:32.122 [INFO][5270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:15:32.160700 containerd[1886]: 2025-07-15 23:15:32.122 [INFO][5270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.198/26] IPv6=[] ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" HandleID="k8s-pod-network.400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" Jul 15 23:15:32.160794 containerd[1886]: 2025-07-15 23:15:32.125 [INFO][5245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0", GenerateName:"calico-apiserver-5684c645d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8795292-5562-4461-a9bd-45b3ce7229be", 
ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5684c645d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"calico-apiserver-5684c645d7-ljgbp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72ae6124c89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:32.160828 containerd[1886]: 2025-07-15 23:15:32.125 [INFO][5245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.198/32] ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" Jul 15 23:15:32.160828 containerd[1886]: 2025-07-15 23:15:32.125 [INFO][5245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72ae6124c89 ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" Jul 15 23:15:32.160828 
containerd[1886]: 2025-07-15 23:15:32.134 [INFO][5245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" Jul 15 23:15:32.160871 containerd[1886]: 2025-07-15 23:15:32.134 [INFO][5245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0", GenerateName:"calico-apiserver-5684c645d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8795292-5562-4461-a9bd-45b3ce7229be", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 15, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5684c645d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8", Pod:"calico-apiserver-5684c645d7-ljgbp", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72ae6124c89", MAC:"0a:3d:87:85:c7:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:32.160903 containerd[1886]: 2025-07-15 23:15:32.155 [INFO][5245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" Namespace="calico-apiserver" Pod="calico-apiserver-5684c645d7-ljgbp" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-calico--apiserver--5684c645d7--ljgbp-eth0" Jul 15 23:15:32.241530 systemd-networkd[1577]: calibfd27b5e7fe: Gained IPv6LL Jul 15 23:15:32.248698 containerd[1886]: time="2025-07-15T23:15:32.248640046Z" level=info msg="connecting to shim 400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8" address="unix:///run/containerd/s/4fadef8804df5d0167cb8608477d72a69346f2cf1b8cbe3d67fe0a21153152c4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:32.265620 systemd-networkd[1577]: caliefe8b4373bd: Link UP Jul 15 23:15:32.267854 systemd-networkd[1577]: caliefe8b4373bd: Gained carrier Jul 15 23:15:32.294984 containerd[1886]: 2025-07-15 23:15:32.060 [INFO][5256] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0 coredns-674b8bbfcf- kube-system 379e70c3-12ee-428a-a67d-5cf64f2aa25c 804 0 2025-07-15 23:14:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-7d7ad51cdd coredns-674b8bbfcf-vfkqv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliefe8b4373bd [{dns 
UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-" Jul 15 23:15:32.294984 containerd[1886]: 2025-07-15 23:15:32.061 [INFO][5256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" Jul 15 23:15:32.294984 containerd[1886]: 2025-07-15 23:15:32.108 [INFO][5278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" HandleID="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.108 [INFO][5278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" HandleID="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"coredns-674b8bbfcf-vfkqv", "timestamp":"2025-07-15 23:15:32.108262566 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.108 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.122 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.123 [INFO][5278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.180 [INFO][5278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.186 [INFO][5278] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.206 [INFO][5278] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.209 [INFO][5278] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.295181 containerd[1886]: 2025-07-15 23:15:32.216 [INFO][5278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.296725 containerd[1886]: 2025-07-15 23:15:32.216 [INFO][5278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.296725 containerd[1886]: 2025-07-15 23:15:32.218 [INFO][5278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75 Jul 15 23:15:32.296725 containerd[1886]: 2025-07-15 23:15:32.232 [INFO][5278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.192/26 
handle="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.296725 containerd[1886]: 2025-07-15 23:15:32.253 [INFO][5278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.199/26] block=192.168.26.192/26 handle="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.296725 containerd[1886]: 2025-07-15 23:15:32.253 [INFO][5278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.199/26] handle="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:32.296725 containerd[1886]: 2025-07-15 23:15:32.253 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:15:32.296725 containerd[1886]: 2025-07-15 23:15:32.253 [INFO][5278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.199/26] IPv6=[] ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" HandleID="k8s-pod-network.ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" Jul 15 23:15:32.297207 containerd[1886]: 2025-07-15 23:15:32.259 [INFO][5256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"379e70c3-12ee-428a-a67d-5cf64f2aa25c", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 56, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"coredns-674b8bbfcf-vfkqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefe8b4373bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:32.297207 containerd[1886]: 2025-07-15 23:15:32.261 [INFO][5256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.199/32] ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" Jul 15 23:15:32.297207 containerd[1886]: 2025-07-15 23:15:32.261 [INFO][5256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefe8b4373bd ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" Jul 15 23:15:32.297207 containerd[1886]: 2025-07-15 23:15:32.267 [INFO][5256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" Jul 15 23:15:32.297207 containerd[1886]: 2025-07-15 23:15:32.271 [INFO][5256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"379e70c3-12ee-428a-a67d-5cf64f2aa25c", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75", Pod:"coredns-674b8bbfcf-vfkqv", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefe8b4373bd", MAC:"22:15:47:a4:0a:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:32.297207 containerd[1886]: 2025-07-15 23:15:32.290 [INFO][5256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" Namespace="kube-system" Pod="coredns-674b8bbfcf-vfkqv" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--vfkqv-eth0" Jul 15 23:15:32.302469 systemd[1]: Started cri-containerd-400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8.scope - libcontainer container 400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8. Jul 15 23:15:32.314116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854128192.mount: Deactivated successfully. 
Jul 15 23:15:32.358459 containerd[1886]: time="2025-07-15T23:15:32.358418559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5684c645d7-ljgbp,Uid:b8795292-5562-4461-a9bd-45b3ce7229be,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8\"" Jul 15 23:15:32.398149 containerd[1886]: time="2025-07-15T23:15:32.397999796Z" level=info msg="connecting to shim ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75" address="unix:///run/containerd/s/365d3d27bce621ef32e896456ef6a2859369db84da26cfa4e4b825844646dfb6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:32.427483 systemd[1]: Started cri-containerd-ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75.scope - libcontainer container ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75. Jul 15 23:15:32.478842 containerd[1886]: time="2025-07-15T23:15:32.478800499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vfkqv,Uid:379e70c3-12ee-428a-a67d-5cf64f2aa25c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75\"" Jul 15 23:15:32.488223 containerd[1886]: time="2025-07-15T23:15:32.488028520Z" level=info msg="CreateContainer within sandbox \"ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:15:32.534907 containerd[1886]: time="2025-07-15T23:15:32.534857500Z" level=info msg="Container 4d49e88566c574051bb3a685459cd439934f194a4547bd21bfd3e5d684fd3d16: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:32.570506 containerd[1886]: time="2025-07-15T23:15:32.570462988Z" level=info msg="CreateContainer within sandbox \"ba741324739c54c6cd930434d56b27c475ff196a85483fa968eddcba633edc75\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"4d49e88566c574051bb3a685459cd439934f194a4547bd21bfd3e5d684fd3d16\"" Jul 15 23:15:32.572422 containerd[1886]: time="2025-07-15T23:15:32.572382312Z" level=info msg="StartContainer for \"4d49e88566c574051bb3a685459cd439934f194a4547bd21bfd3e5d684fd3d16\"" Jul 15 23:15:32.573913 containerd[1886]: time="2025-07-15T23:15:32.573870465Z" level=info msg="connecting to shim 4d49e88566c574051bb3a685459cd439934f194a4547bd21bfd3e5d684fd3d16" address="unix:///run/containerd/s/365d3d27bce621ef32e896456ef6a2859369db84da26cfa4e4b825844646dfb6" protocol=ttrpc version=3 Jul 15 23:15:32.604541 systemd[1]: Started cri-containerd-4d49e88566c574051bb3a685459cd439934f194a4547bd21bfd3e5d684fd3d16.scope - libcontainer container 4d49e88566c574051bb3a685459cd439934f194a4547bd21bfd3e5d684fd3d16. Jul 15 23:15:32.670417 containerd[1886]: time="2025-07-15T23:15:32.669930858Z" level=info msg="StartContainer for \"4d49e88566c574051bb3a685459cd439934f194a4547bd21bfd3e5d684fd3d16\" returns successfully" Jul 15 23:15:32.954226 containerd[1886]: time="2025-07-15T23:15:32.954059775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nq752,Uid:a83216d8-36e6-4672-9453-104ead949734,Namespace:kube-system,Attempt:0,}" Jul 15 23:15:33.176650 kubelet[3483]: I0715 23:15:33.176163 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vfkqv" podStartSLOduration=37.176145918 podStartE2EDuration="37.176145918s" podCreationTimestamp="2025-07-15 23:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:15:33.161650609 +0000 UTC m=+44.289179669" watchObservedRunningTime="2025-07-15 23:15:33.176145918 +0000 UTC m=+44.303674978" Jul 15 23:15:33.201443 systemd-networkd[1577]: cali72ae6124c89: Gained IPv6LL Jul 15 23:15:33.491011 containerd[1886]: time="2025-07-15T23:15:33.490469102Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:33.496616 containerd[1886]: time="2025-07-15T23:15:33.496575934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 23:15:33.500891 containerd[1886]: time="2025-07-15T23:15:33.500847931Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:33.506872 containerd[1886]: time="2025-07-15T23:15:33.506082370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:33.507665 containerd[1886]: time="2025-07-15T23:15:33.507614412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.220078819s" Jul 15 23:15:33.507851 containerd[1886]: time="2025-07-15T23:15:33.507821626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 23:15:33.511069 containerd[1886]: time="2025-07-15T23:15:33.511034114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:15:33.516500 containerd[1886]: time="2025-07-15T23:15:33.516459471Z" level=info msg="CreateContainer within sandbox \"94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:15:33.538528 
systemd-networkd[1577]: calied2cbfa1436: Link UP Jul 15 23:15:33.539451 systemd-networkd[1577]: calied2cbfa1436: Gained carrier Jul 15 23:15:33.550052 containerd[1886]: time="2025-07-15T23:15:33.549400542Z" level=info msg="Container e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:33.551783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666699607.mount: Deactivated successfully. Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.454 [INFO][5443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0 coredns-674b8bbfcf- kube-system a83216d8-36e6-4672-9453-104ead949734 805 0 2025-07-15 23:14:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-7d7ad51cdd coredns-674b8bbfcf-nq752 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied2cbfa1436 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.454 [INFO][5443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.479 [INFO][5460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" 
HandleID="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.480 [INFO][5460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" HandleID="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024ba20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-7d7ad51cdd", "pod":"coredns-674b8bbfcf-nq752", "timestamp":"2025-07-15 23:15:33.479944662 +0000 UTC"}, Hostname:"ci-4372.0.1-n-7d7ad51cdd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.480 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.480 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.480 [INFO][5460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-7d7ad51cdd' Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.488 [INFO][5460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.495 [INFO][5460] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.502 [INFO][5460] ipam/ipam.go 511: Trying affinity for 192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.504 [INFO][5460] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.507 [INFO][5460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.507 [INFO][5460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.512 [INFO][5460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6 Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.518 [INFO][5460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.527 [INFO][5460] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.26.200/26] block=192.168.26.192/26 handle="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.527 [INFO][5460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.200/26] handle="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" host="ci-4372.0.1-n-7d7ad51cdd" Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.527 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:15:33.568522 containerd[1886]: 2025-07-15 23:15:33.527 [INFO][5460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.200/26] IPv6=[] ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" HandleID="k8s-pod-network.8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Workload="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" Jul 15 23:15:33.568948 containerd[1886]: 2025-07-15 23:15:33.529 [INFO][5443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a83216d8-36e6-4672-9453-104ead949734", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"", Pod:"coredns-674b8bbfcf-nq752", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied2cbfa1436", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:33.568948 containerd[1886]: 2025-07-15 23:15:33.529 [INFO][5443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.200/32] ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" Jul 15 23:15:33.568948 containerd[1886]: 2025-07-15 23:15:33.529 [INFO][5443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied2cbfa1436 ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" Jul 15 23:15:33.568948 containerd[1886]: 2025-07-15 23:15:33.539 [INFO][5443] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" Jul 15 23:15:33.568948 containerd[1886]: 2025-07-15 23:15:33.543 [INFO][5443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a83216d8-36e6-4672-9453-104ead949734", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-7d7ad51cdd", ContainerID:"8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6", Pod:"coredns-674b8bbfcf-nq752", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied2cbfa1436", MAC:"3a:f6:a8:e4:31:33", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:15:33.568948 containerd[1886]: 2025-07-15 23:15:33.562 [INFO][5443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nq752" WorkloadEndpoint="ci--4372.0.1--n--7d7ad51cdd-k8s-coredns--674b8bbfcf--nq752-eth0" Jul 15 23:15:33.577542 containerd[1886]: time="2025-07-15T23:15:33.577241257Z" level=info msg="CreateContainer within sandbox \"94c192a90bc65fb721c59f4a6372aeaba504b9654db44d43a8b8aee27be828c0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\"" Jul 15 23:15:33.580805 containerd[1886]: time="2025-07-15T23:15:33.579465246Z" level=info msg="StartContainer for \"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\"" Jul 15 23:15:33.581173 containerd[1886]: time="2025-07-15T23:15:33.581042361Z" level=info msg="connecting to shim e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d" address="unix:///run/containerd/s/b1054c80fc545e307a3c537b7cf06231b1f740e987b7e27d788d66b72355aab2" protocol=ttrpc version=3 Jul 15 23:15:33.605505 systemd[1]: Started cri-containerd-e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d.scope - libcontainer container e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d. 
Jul 15 23:15:33.635396 containerd[1886]: time="2025-07-15T23:15:33.635287184Z" level=info msg="connecting to shim 8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6" address="unix:///run/containerd/s/c733b2d94ab01a20bc1fbf74d24ce192217d1783dc4a1daab65fcbb6b68084a2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:15:33.645649 containerd[1886]: time="2025-07-15T23:15:33.645539521Z" level=info msg="StartContainer for \"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" returns successfully" Jul 15 23:15:33.665495 systemd[1]: Started cri-containerd-8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6.scope - libcontainer container 8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6. Jul 15 23:15:33.713044 containerd[1886]: time="2025-07-15T23:15:33.712926328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nq752,Uid:a83216d8-36e6-4672-9453-104ead949734,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6\"" Jul 15 23:15:33.721400 containerd[1886]: time="2025-07-15T23:15:33.721339231Z" level=info msg="CreateContainer within sandbox \"8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:15:33.751603 containerd[1886]: time="2025-07-15T23:15:33.750914889Z" level=info msg="Container 01c45df64175f1b848fab99141bb8fe27fd8aa9dc44bce0d6d4569c21c9e43b4: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:33.775016 containerd[1886]: time="2025-07-15T23:15:33.774892008Z" level=info msg="CreateContainer within sandbox \"8ad822913ddad019e01a29da0d483a00b06f77f0711c0e4db28eeaed351956e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01c45df64175f1b848fab99141bb8fe27fd8aa9dc44bce0d6d4569c21c9e43b4\"" Jul 15 23:15:33.776310 containerd[1886]: time="2025-07-15T23:15:33.775969302Z" level=info msg="StartContainer for 
\"01c45df64175f1b848fab99141bb8fe27fd8aa9dc44bce0d6d4569c21c9e43b4\"" Jul 15 23:15:33.777753 systemd-networkd[1577]: caliefe8b4373bd: Gained IPv6LL Jul 15 23:15:33.778106 containerd[1886]: time="2025-07-15T23:15:33.778075207Z" level=info msg="connecting to shim 01c45df64175f1b848fab99141bb8fe27fd8aa9dc44bce0d6d4569c21c9e43b4" address="unix:///run/containerd/s/c733b2d94ab01a20bc1fbf74d24ce192217d1783dc4a1daab65fcbb6b68084a2" protocol=ttrpc version=3 Jul 15 23:15:33.795449 systemd[1]: Started cri-containerd-01c45df64175f1b848fab99141bb8fe27fd8aa9dc44bce0d6d4569c21c9e43b4.scope - libcontainer container 01c45df64175f1b848fab99141bb8fe27fd8aa9dc44bce0d6d4569c21c9e43b4. Jul 15 23:15:33.898252 containerd[1886]: time="2025-07-15T23:15:33.898140258Z" level=info msg="StartContainer for \"01c45df64175f1b848fab99141bb8fe27fd8aa9dc44bce0d6d4569c21c9e43b4\" returns successfully" Jul 15 23:15:34.172312 kubelet[3483]: I0715 23:15:34.171890 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nq752" podStartSLOduration=37.171874221 podStartE2EDuration="37.171874221s" podCreationTimestamp="2025-07-15 23:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:15:34.171581141 +0000 UTC m=+45.299110201" watchObservedRunningTime="2025-07-15 23:15:34.171874221 +0000 UTC m=+45.299403281" Jul 15 23:15:34.199966 kubelet[3483]: I0715 23:15:34.199849 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-97d7l" podStartSLOduration=21.977588419 podStartE2EDuration="25.19982973s" podCreationTimestamp="2025-07-15 23:15:09 +0000 UTC" firstStartedPulling="2025-07-15 23:15:30.286744635 +0000 UTC m=+41.414273695" lastFinishedPulling="2025-07-15 23:15:33.508985946 +0000 UTC m=+44.636515006" observedRunningTime="2025-07-15 23:15:34.198803414 +0000 UTC m=+45.326332482" 
watchObservedRunningTime="2025-07-15 23:15:34.19982973 +0000 UTC m=+45.327358798" Jul 15 23:15:34.489361 containerd[1886]: time="2025-07-15T23:15:34.489306852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"75b32a6b2cc92a79b31a25016ad013bf0f7bfea2b4b78da4ce7d0628a2ab61d9\" pid:5603 exit_status:1 exited_at:{seconds:1752621334 nanos:488936842}" Jul 15 23:15:35.057438 systemd-networkd[1577]: calied2cbfa1436: Gained IPv6LL Jul 15 23:15:35.230630 containerd[1886]: time="2025-07-15T23:15:35.230586999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"d3c017a0a40f1a61770a0ac000851e22edb855c9233d5e2260662aeca2f22074\" pid:5629 exit_status:1 exited_at:{seconds:1752621335 nanos:227446361}" Jul 15 23:15:36.004110 containerd[1886]: time="2025-07-15T23:15:36.004057858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:36.007590 containerd[1886]: time="2025-07-15T23:15:36.007554146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 23:15:36.013754 containerd[1886]: time="2025-07-15T23:15:36.013724434Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:36.019054 containerd[1886]: time="2025-07-15T23:15:36.019021067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:36.019848 containerd[1886]: time="2025-07-15T23:15:36.019816817Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.50874495s" Jul 15 23:15:36.019848 containerd[1886]: time="2025-07-15T23:15:36.019849098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 23:15:36.021065 containerd[1886]: time="2025-07-15T23:15:36.020688857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:15:36.051546 containerd[1886]: time="2025-07-15T23:15:36.051497267Z" level=info msg="CreateContainer within sandbox \"6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:15:36.089350 containerd[1886]: time="2025-07-15T23:15:36.089306461Z" level=info msg="Container 5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:36.109806 containerd[1886]: time="2025-07-15T23:15:36.109754556Z" level=info msg="CreateContainer within sandbox \"6ecf48bb24d0f0e547f9cb7c1baabae716b1eeba3698ffd843609617369b8605\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\"" Jul 15 23:15:36.110966 containerd[1886]: time="2025-07-15T23:15:36.110637172Z" level=info msg="StartContainer for \"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\"" Jul 15 23:15:36.113043 containerd[1886]: time="2025-07-15T23:15:36.113010789Z" level=info msg="connecting to shim 5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1" 
address="unix:///run/containerd/s/6f92796652fa7b8782c22b69c62ff447bf1382ecbcfe703888ce90e712944b7d" protocol=ttrpc version=3 Jul 15 23:15:36.155466 systemd[1]: Started cri-containerd-5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1.scope - libcontainer container 5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1. Jul 15 23:15:36.289655 containerd[1886]: time="2025-07-15T23:15:36.289502030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"9d20432e63fc3aef9cd4cac3cac7f9c9fa61613ee03c2b57b686a45bc1908c44\" pid:5679 exit_status:1 exited_at:{seconds:1752621336 nanos:287463654}" Jul 15 23:15:36.424464 containerd[1886]: time="2025-07-15T23:15:36.424414382Z" level=info msg="StartContainer for \"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" returns successfully" Jul 15 23:15:37.184143 kubelet[3483]: I0715 23:15:37.183901 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69f8c8b5f9-754zd" podStartSLOduration=22.543042573 podStartE2EDuration="28.183885051s" podCreationTimestamp="2025-07-15 23:15:09 +0000 UTC" firstStartedPulling="2025-07-15 23:15:30.379695783 +0000 UTC m=+41.507224843" lastFinishedPulling="2025-07-15 23:15:36.020538261 +0000 UTC m=+47.148067321" observedRunningTime="2025-07-15 23:15:37.18318148 +0000 UTC m=+48.310710612" watchObservedRunningTime="2025-07-15 23:15:37.183885051 +0000 UTC m=+48.311414111" Jul 15 23:15:37.361136 containerd[1886]: time="2025-07-15T23:15:37.361080040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:37.363438 containerd[1886]: time="2025-07-15T23:15:37.363389743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 23:15:37.370226 containerd[1886]: 
time="2025-07-15T23:15:37.370002476Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:37.376625 containerd[1886]: time="2025-07-15T23:15:37.376580223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:37.376925 containerd[1886]: time="2025-07-15T23:15:37.376898832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.356183687s" Jul 15 23:15:37.376977 containerd[1886]: time="2025-07-15T23:15:37.376927409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 23:15:37.385021 containerd[1886]: time="2025-07-15T23:15:37.384980421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:15:37.386341 containerd[1886]: time="2025-07-15T23:15:37.385617535Z" level=info msg="CreateContainer within sandbox \"b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 23:15:37.432781 containerd[1886]: time="2025-07-15T23:15:37.432499104Z" level=info msg="Container f77f4c9e601ee53bbf202f08e4d87ea561b1038cc6b39918a984479a3b5d421b: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:37.457483 containerd[1886]: time="2025-07-15T23:15:37.456991358Z" level=info msg="CreateContainer within sandbox \"b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f77f4c9e601ee53bbf202f08e4d87ea561b1038cc6b39918a984479a3b5d421b\"" Jul 15 23:15:37.458353 containerd[1886]: time="2025-07-15T23:15:37.458318370Z" level=info msg="StartContainer for \"f77f4c9e601ee53bbf202f08e4d87ea561b1038cc6b39918a984479a3b5d421b\"" Jul 15 23:15:37.460734 containerd[1886]: time="2025-07-15T23:15:37.460709852Z" level=info msg="connecting to shim f77f4c9e601ee53bbf202f08e4d87ea561b1038cc6b39918a984479a3b5d421b" address="unix:///run/containerd/s/903513f56735bba147d9676d7e40bdaa5836a0b40683a4c5a2aa2ca9954cf560" protocol=ttrpc version=3 Jul 15 23:15:37.494149 systemd[1]: Started cri-containerd-f77f4c9e601ee53bbf202f08e4d87ea561b1038cc6b39918a984479a3b5d421b.scope - libcontainer container f77f4c9e601ee53bbf202f08e4d87ea561b1038cc6b39918a984479a3b5d421b. Jul 15 23:15:37.547801 containerd[1886]: time="2025-07-15T23:15:37.547678653Z" level=info msg="StartContainer for \"f77f4c9e601ee53bbf202f08e4d87ea561b1038cc6b39918a984479a3b5d421b\" returns successfully" Jul 15 23:15:38.170381 kubelet[3483]: I0715 23:15:38.170346 3483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:15:41.485709 containerd[1886]: time="2025-07-15T23:15:41.485649966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:41.488108 containerd[1886]: time="2025-07-15T23:15:41.487961213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 23:15:41.493153 containerd[1886]: time="2025-07-15T23:15:41.493122066Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:41.497085 containerd[1886]: time="2025-07-15T23:15:41.497037413Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:41.497556 containerd[1886]: time="2025-07-15T23:15:41.497382375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 4.112366449s" Jul 15 23:15:41.497556 containerd[1886]: time="2025-07-15T23:15:41.497409760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:15:41.498423 containerd[1886]: time="2025-07-15T23:15:41.498402667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:15:41.506129 containerd[1886]: time="2025-07-15T23:15:41.505727795Z" level=info msg="CreateContainer within sandbox \"6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:15:41.548078 containerd[1886]: time="2025-07-15T23:15:41.548030551Z" level=info msg="Container d011023ca8fbd3de329ab1eb5310496f3e7ee120a8487e8ee478cb1cb761838d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:41.570209 containerd[1886]: time="2025-07-15T23:15:41.570166293Z" level=info msg="CreateContainer within sandbox \"6aa7289486f363b591223cf97e314be73682d53ec3a2d599b4e331b30068026e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d011023ca8fbd3de329ab1eb5310496f3e7ee120a8487e8ee478cb1cb761838d\"" Jul 15 23:15:41.570895 containerd[1886]: time="2025-07-15T23:15:41.570690139Z" level=info msg="StartContainer for 
\"d011023ca8fbd3de329ab1eb5310496f3e7ee120a8487e8ee478cb1cb761838d\"" Jul 15 23:15:41.571777 containerd[1886]: time="2025-07-15T23:15:41.571748912Z" level=info msg="connecting to shim d011023ca8fbd3de329ab1eb5310496f3e7ee120a8487e8ee478cb1cb761838d" address="unix:///run/containerd/s/8c89b0b7c6c650b185aee1dd7ca1e17198dd6c3572996affbc8019dde97576e8" protocol=ttrpc version=3 Jul 15 23:15:41.592443 systemd[1]: Started cri-containerd-d011023ca8fbd3de329ab1eb5310496f3e7ee120a8487e8ee478cb1cb761838d.scope - libcontainer container d011023ca8fbd3de329ab1eb5310496f3e7ee120a8487e8ee478cb1cb761838d. Jul 15 23:15:41.630120 containerd[1886]: time="2025-07-15T23:15:41.630086659Z" level=info msg="StartContainer for \"d011023ca8fbd3de329ab1eb5310496f3e7ee120a8487e8ee478cb1cb761838d\" returns successfully" Jul 15 23:15:41.852381 containerd[1886]: time="2025-07-15T23:15:41.852151449Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:41.862113 containerd[1886]: time="2025-07-15T23:15:41.862051361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:15:41.863299 containerd[1886]: time="2025-07-15T23:15:41.863213785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 364.789046ms" Jul 15 23:15:41.863428 containerd[1886]: time="2025-07-15T23:15:41.863411238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:15:41.867324 containerd[1886]: time="2025-07-15T23:15:41.867211727Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 23:15:41.876663 containerd[1886]: time="2025-07-15T23:15:41.875797059Z" level=info msg="CreateContainer within sandbox \"400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:15:41.921450 containerd[1886]: time="2025-07-15T23:15:41.921150497Z" level=info msg="Container c933121e257f0f09f006511abe19707bb002e80b1db802c5d53c18135ae11d35: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:41.949110 containerd[1886]: time="2025-07-15T23:15:41.949065608Z" level=info msg="CreateContainer within sandbox \"400ee731c2522f28237cd0ec3fefa634eba4b51c31633f47b4fb986e4a253ed8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c933121e257f0f09f006511abe19707bb002e80b1db802c5d53c18135ae11d35\"" Jul 15 23:15:41.951271 containerd[1886]: time="2025-07-15T23:15:41.951238796Z" level=info msg="StartContainer for \"c933121e257f0f09f006511abe19707bb002e80b1db802c5d53c18135ae11d35\"" Jul 15 23:15:41.952165 containerd[1886]: time="2025-07-15T23:15:41.952139036Z" level=info msg="connecting to shim c933121e257f0f09f006511abe19707bb002e80b1db802c5d53c18135ae11d35" address="unix:///run/containerd/s/4fadef8804df5d0167cb8608477d72a69346f2cf1b8cbe3d67fe0a21153152c4" protocol=ttrpc version=3 Jul 15 23:15:41.974491 systemd[1]: Started cri-containerd-c933121e257f0f09f006511abe19707bb002e80b1db802c5d53c18135ae11d35.scope - libcontainer container c933121e257f0f09f006511abe19707bb002e80b1db802c5d53c18135ae11d35. 
Jul 15 23:15:42.010513 containerd[1886]: time="2025-07-15T23:15:42.010473127Z" level=info msg="StartContainer for \"c933121e257f0f09f006511abe19707bb002e80b1db802c5d53c18135ae11d35\" returns successfully" Jul 15 23:15:42.235266 kubelet[3483]: I0715 23:15:42.235126 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5684c645d7-ljgbp" podStartSLOduration=27.729033167 podStartE2EDuration="37.235017673s" podCreationTimestamp="2025-07-15 23:15:05 +0000 UTC" firstStartedPulling="2025-07-15 23:15:32.359730019 +0000 UTC m=+43.487259079" lastFinishedPulling="2025-07-15 23:15:41.865714525 +0000 UTC m=+52.993243585" observedRunningTime="2025-07-15 23:15:42.212111684 +0000 UTC m=+53.339640752" watchObservedRunningTime="2025-07-15 23:15:42.235017673 +0000 UTC m=+53.362546733" Jul 15 23:15:42.236586 kubelet[3483]: I0715 23:15:42.235919 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5684c645d7-7hkgz" podStartSLOduration=26.932436754 podStartE2EDuration="37.235907786s" podCreationTimestamp="2025-07-15 23:15:05 +0000 UTC" firstStartedPulling="2025-07-15 23:15:31.194740933 +0000 UTC m=+42.322269993" lastFinishedPulling="2025-07-15 23:15:41.498211957 +0000 UTC m=+52.625741025" observedRunningTime="2025-07-15 23:15:42.233259681 +0000 UTC m=+53.360788749" watchObservedRunningTime="2025-07-15 23:15:42.235907786 +0000 UTC m=+53.363437022" Jul 15 23:15:43.190996 kubelet[3483]: I0715 23:15:43.190388 3483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:15:44.012961 containerd[1886]: time="2025-07-15T23:15:44.012903222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:44.016066 containerd[1886]: time="2025-07-15T23:15:44.015883928Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 23:15:44.020341 containerd[1886]: time="2025-07-15T23:15:44.020200175Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:44.024700 containerd[1886]: time="2025-07-15T23:15:44.024649513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:44.025181 containerd[1886]: time="2025-07-15T23:15:44.025156871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.157917248s" Jul 15 23:15:44.025270 containerd[1886]: time="2025-07-15T23:15:44.025257394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 23:15:44.037155 containerd[1886]: time="2025-07-15T23:15:44.037120015Z" level=info msg="CreateContainer within sandbox \"b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 23:15:44.083031 containerd[1886]: time="2025-07-15T23:15:44.082686820Z" level=info msg="Container f7a68a47ebf5d1f107d7ea0a5189b0fea7c62b3ab8fec510d5efb9fbcaef11da: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:44.109876 containerd[1886]: time="2025-07-15T23:15:44.109805437Z" level=info msg="CreateContainer 
within sandbox \"b6652ef0e0a98625ccca42d3460f732480eee4d12666c1cb6e0727ca3ccc4a8f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f7a68a47ebf5d1f107d7ea0a5189b0fea7c62b3ab8fec510d5efb9fbcaef11da\"" Jul 15 23:15:44.112520 containerd[1886]: time="2025-07-15T23:15:44.111659608Z" level=info msg="StartContainer for \"f7a68a47ebf5d1f107d7ea0a5189b0fea7c62b3ab8fec510d5efb9fbcaef11da\"" Jul 15 23:15:44.116203 containerd[1886]: time="2025-07-15T23:15:44.116171796Z" level=info msg="connecting to shim f7a68a47ebf5d1f107d7ea0a5189b0fea7c62b3ab8fec510d5efb9fbcaef11da" address="unix:///run/containerd/s/903513f56735bba147d9676d7e40bdaa5836a0b40683a4c5a2aa2ca9954cf560" protocol=ttrpc version=3 Jul 15 23:15:44.139438 systemd[1]: Started cri-containerd-f7a68a47ebf5d1f107d7ea0a5189b0fea7c62b3ab8fec510d5efb9fbcaef11da.scope - libcontainer container f7a68a47ebf5d1f107d7ea0a5189b0fea7c62b3ab8fec510d5efb9fbcaef11da. Jul 15 23:15:44.176295 containerd[1886]: time="2025-07-15T23:15:44.176119403Z" level=info msg="StartContainer for \"f7a68a47ebf5d1f107d7ea0a5189b0fea7c62b3ab8fec510d5efb9fbcaef11da\" returns successfully" Jul 15 23:15:44.214333 kubelet[3483]: I0715 23:15:44.214205 3483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dhhj5" podStartSLOduration=21.635702046 podStartE2EDuration="35.214186881s" podCreationTimestamp="2025-07-15 23:15:09 +0000 UTC" firstStartedPulling="2025-07-15 23:15:30.449343261 +0000 UTC m=+41.576872321" lastFinishedPulling="2025-07-15 23:15:44.027828088 +0000 UTC m=+55.155357156" observedRunningTime="2025-07-15 23:15:44.213391907 +0000 UTC m=+55.340921007" watchObservedRunningTime="2025-07-15 23:15:44.214186881 +0000 UTC m=+55.341715941" Jul 15 23:15:45.063471 kubelet[3483]: I0715 23:15:45.063430 3483 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock 
versions: 1.0.0 Jul 15 23:15:45.065763 kubelet[3483]: I0715 23:15:45.065741 3483 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 23:15:47.625194 kubelet[3483]: I0715 23:15:47.624793 3483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:15:47.695964 containerd[1886]: time="2025-07-15T23:15:47.695923042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"5d1299a387b90d0865ca0affaa2c4ce320b9bc1ed098dd100800d8a5889d643a\" pid:5896 exited_at:{seconds:1752621347 nanos:695621786}" Jul 15 23:15:47.741790 containerd[1886]: time="2025-07-15T23:15:47.741745693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"2f9c5ab0cd0e0ee16a5d9d793ffbf9f4885f698a9d5395a5b3599913d623dca8\" pid:5918 exited_at:{seconds:1752621347 nanos:741283992}" Jul 15 23:15:54.149365 containerd[1886]: time="2025-07-15T23:15:54.149314748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\" id:\"9f9daabb00d9af2a6011a3623b0d4005893efc16f71e544bcaeea1f6ef41f1a4\" pid:5949 exited_at:{seconds:1752621354 nanos:148985324}" Jul 15 23:15:54.230257 containerd[1886]: time="2025-07-15T23:15:54.230127577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\" id:\"50b96ca35743febb49dc032e551ecd1a049600fd426a44fdff591bc4b6928fc6\" pid:5976 exited_at:{seconds:1752621354 nanos:229812073}" Jul 15 23:15:57.619773 containerd[1886]: time="2025-07-15T23:15:57.619736532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" 
id:\"74be00160042d40616de40bba0f77f55960d903ede168b642a67739845cd0e16\" pid:6003 exited_at:{seconds:1752621357 nanos:619432499}" Jul 15 23:15:59.159545 containerd[1886]: time="2025-07-15T23:15:59.159499549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"8b30e2b8ba5c88a7d255637f9792922bee099395daf4bb57bdf1b7a81d2152f0\" pid:6025 exited_at:{seconds:1752621359 nanos:159285079}" Jul 15 23:16:06.186247 kubelet[3483]: I0715 23:16:06.186206 3483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:16:06.241425 containerd[1886]: time="2025-07-15T23:16:06.240681083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"8b24664a031b4d218a0fbf23830942f811cef44a647ffdd224bc5d208474d508\" pid:6046 exited_at:{seconds:1752621366 nanos:240399740}" Jul 15 23:16:17.726336 containerd[1886]: time="2025-07-15T23:16:17.726059106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"01af73b3d56dc33a138719bdcad57512521aea01add9aa2dd5480cab6eec1a22\" pid:6081 exited_at:{seconds:1752621377 nanos:725864909}" Jul 15 23:16:24.214990 containerd[1886]: time="2025-07-15T23:16:24.214932882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\" id:\"04b58dcc83b4a0c2b541ec7299a50f27c38b4dfbdc55835cd76d8fe14b27c843\" pid:6102 exited_at:{seconds:1752621384 nanos:214430693}" Jul 15 23:16:36.220832 containerd[1886]: time="2025-07-15T23:16:36.220781279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"3e4154d3c2023879125bbbd0ed61d36e4e6527bffc024eb6b0969fd9727e3a2e\" pid:6131 exited_at:{seconds:1752621396 nanos:220444734}" Jul 15 
23:16:39.227045 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:40802.service - OpenSSH per-connection server daemon (10.200.16.10:40802). Jul 15 23:16:39.698721 sshd[6146]: Accepted publickey for core from 10.200.16.10 port 40802 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:16:39.698165 sshd-session[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:16:39.703355 systemd-logind[1862]: New session 10 of user core. Jul 15 23:16:39.706397 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 23:16:40.119453 sshd[6148]: Connection closed by 10.200.16.10 port 40802 Jul 15 23:16:40.119257 sshd-session[6146]: pam_unix(sshd:session): session closed for user core Jul 15 23:16:40.123396 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:40802.service: Deactivated successfully. Jul 15 23:16:40.126088 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 23:16:40.127524 systemd-logind[1862]: Session 10 logged out. Waiting for processes to exit. Jul 15 23:16:40.129620 systemd-logind[1862]: Removed session 10. Jul 15 23:16:45.199474 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:41678.service - OpenSSH per-connection server daemon (10.200.16.10:41678). Jul 15 23:16:45.634830 sshd[6162]: Accepted publickey for core from 10.200.16.10 port 41678 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:16:45.636798 sshd-session[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:16:45.641289 systemd-logind[1862]: New session 11 of user core. Jul 15 23:16:45.648439 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:16:46.016984 sshd[6164]: Connection closed by 10.200.16.10 port 41678 Jul 15 23:16:46.018513 sshd-session[6162]: pam_unix(sshd:session): session closed for user core Jul 15 23:16:46.022361 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:41678.service: Deactivated successfully. 
Jul 15 23:16:46.026994 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:16:46.030127 systemd-logind[1862]: Session 11 logged out. Waiting for processes to exit. Jul 15 23:16:46.033189 systemd-logind[1862]: Removed session 11. Jul 15 23:16:47.730665 containerd[1886]: time="2025-07-15T23:16:47.730615605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"af34a6bf1a98b7323816f9ac8d73d98858fed5bf06df1b307b745668eb6af4ad\" pid:6188 exited_at:{seconds:1752621407 nanos:730333709}" Jul 15 23:16:51.102055 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:47694.service - OpenSSH per-connection server daemon (10.200.16.10:47694). Jul 15 23:16:51.564073 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 47694 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:16:51.565538 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:16:51.571357 systemd-logind[1862]: New session 12 of user core. Jul 15 23:16:51.575434 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 23:16:51.938058 sshd[6209]: Connection closed by 10.200.16.10 port 47694 Jul 15 23:16:51.937302 sshd-session[6207]: pam_unix(sshd:session): session closed for user core Jul 15 23:16:51.940920 systemd-logind[1862]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:16:51.941477 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:47694.service: Deactivated successfully. Jul 15 23:16:51.943700 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:16:51.945868 systemd-logind[1862]: Removed session 12. Jul 15 23:16:52.030774 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:47710.service - OpenSSH per-connection server daemon (10.200.16.10:47710). 
Jul 15 23:16:52.523791 sshd[6222]: Accepted publickey for core from 10.200.16.10 port 47710 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:16:52.525051 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:16:52.529034 systemd-logind[1862]: New session 13 of user core. Jul 15 23:16:52.539591 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 23:16:52.956910 sshd[6224]: Connection closed by 10.200.16.10 port 47710 Jul 15 23:16:52.957671 sshd-session[6222]: pam_unix(sshd:session): session closed for user core Jul 15 23:16:52.961424 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:47710.service: Deactivated successfully. Jul 15 23:16:52.963747 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:16:52.965189 systemd-logind[1862]: Session 13 logged out. Waiting for processes to exit. Jul 15 23:16:52.967088 systemd-logind[1862]: Removed session 13. Jul 15 23:16:53.040498 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:47718.service - OpenSSH per-connection server daemon (10.200.16.10:47718). Jul 15 23:16:53.498640 sshd[6234]: Accepted publickey for core from 10.200.16.10 port 47718 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:16:53.499815 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:16:53.503469 systemd-logind[1862]: New session 14 of user core. Jul 15 23:16:53.509516 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:16:53.880075 sshd[6236]: Connection closed by 10.200.16.10 port 47718 Jul 15 23:16:53.879505 sshd-session[6234]: pam_unix(sshd:session): session closed for user core Jul 15 23:16:53.882298 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:47718.service: Deactivated successfully. Jul 15 23:16:53.884014 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 23:16:53.885223 systemd-logind[1862]: Session 14 logged out. 
Waiting for processes to exit. Jul 15 23:16:53.886714 systemd-logind[1862]: Removed session 14. Jul 15 23:16:54.209788 containerd[1886]: time="2025-07-15T23:16:54.209668025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\" id:\"9c05f58ca57f4717c6b3c1c34c2754a49d96a4ae142fe7939a1ad897cd2085e7\" pid:6262 exited_at:{seconds:1752621414 nanos:209239014}" Jul 15 23:16:57.596480 containerd[1886]: time="2025-07-15T23:16:57.596402913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"163fd834def0f440a0ca07b70ee1692da1265863230329c1703ee6eddaa5ed74\" pid:6291 exited_at:{seconds:1752621417 nanos:595573739}" Jul 15 23:16:58.980973 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:47724.service - OpenSSH per-connection server daemon (10.200.16.10:47724). Jul 15 23:16:59.152363 containerd[1886]: time="2025-07-15T23:16:59.152320825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"aae3deebeb7e1fe80e0682fab05e8f9b291ef7a8f0144dfafc3bb88233b3722d\" pid:6315 exited_at:{seconds:1752621419 nanos:151916991}" Jul 15 23:16:59.436138 sshd[6301]: Accepted publickey for core from 10.200.16.10 port 47724 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:16:59.437457 sshd-session[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:16:59.441353 systemd-logind[1862]: New session 15 of user core. Jul 15 23:16:59.447417 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 15 23:16:59.823107 sshd[6325]: Connection closed by 10.200.16.10 port 47724 Jul 15 23:16:59.823723 sshd-session[6301]: pam_unix(sshd:session): session closed for user core Jul 15 23:16:59.827555 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:47724.service: Deactivated successfully. Jul 15 23:16:59.827556 systemd-logind[1862]: Session 15 logged out. Waiting for processes to exit. Jul 15 23:16:59.829792 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 23:16:59.831824 systemd-logind[1862]: Removed session 15. Jul 15 23:17:04.902615 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:37234.service - OpenSSH per-connection server daemon (10.200.16.10:37234). Jul 15 23:17:05.364211 sshd[6339]: Accepted publickey for core from 10.200.16.10 port 37234 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:17:05.365513 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:05.369958 systemd-logind[1862]: New session 16 of user core. Jul 15 23:17:05.375447 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 23:17:05.736762 sshd[6355]: Connection closed by 10.200.16.10 port 37234 Jul 15 23:17:05.737373 sshd-session[6339]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:05.740649 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:37234.service: Deactivated successfully. Jul 15 23:17:05.742455 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 23:17:05.743199 systemd-logind[1862]: Session 16 logged out. Waiting for processes to exit. Jul 15 23:17:05.744704 systemd-logind[1862]: Removed session 16. 
Jul 15 23:17:06.226551 containerd[1886]: time="2025-07-15T23:17:06.226422782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"c9d848f66ae853f2f14144d9ec49db28aa3a084deeb59208f91c9d2f0186a2a7\" pid:6386 exited_at:{seconds:1752621426 nanos:225903616}" Jul 15 23:17:10.826795 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:48424.service - OpenSSH per-connection server daemon (10.200.16.10:48424). Jul 15 23:17:11.256763 sshd[6396]: Accepted publickey for core from 10.200.16.10 port 48424 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:17:11.257937 sshd-session[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:11.262120 systemd-logind[1862]: New session 17 of user core. Jul 15 23:17:11.268626 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 23:17:11.632256 sshd[6398]: Connection closed by 10.200.16.10 port 48424 Jul 15 23:17:11.631393 sshd-session[6396]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:11.634428 systemd-logind[1862]: Session 17 logged out. Waiting for processes to exit. Jul 15 23:17:11.635956 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:48424.service: Deactivated successfully. Jul 15 23:17:11.637967 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 23:17:11.639861 systemd-logind[1862]: Removed session 17. Jul 15 23:17:11.708744 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:48432.service - OpenSSH per-connection server daemon (10.200.16.10:48432). Jul 15 23:17:12.142413 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 48432 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:17:12.143579 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:12.147483 systemd-logind[1862]: New session 18 of user core. 
Jul 15 23:17:12.154431 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 23:17:12.587953 sshd[6411]: Connection closed by 10.200.16.10 port 48432 Jul 15 23:17:12.589199 sshd-session[6409]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:12.592848 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:48432.service: Deactivated successfully. Jul 15 23:17:12.595350 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 23:17:12.597071 systemd-logind[1862]: Session 18 logged out. Waiting for processes to exit. Jul 15 23:17:12.600287 systemd-logind[1862]: Removed session 18. Jul 15 23:17:12.674527 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:48448.service - OpenSSH per-connection server daemon (10.200.16.10:48448). Jul 15 23:17:13.137613 sshd[6420]: Accepted publickey for core from 10.200.16.10 port 48448 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk Jul 15 23:17:13.139861 sshd-session[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:13.147628 systemd-logind[1862]: New session 19 of user core. Jul 15 23:17:13.153500 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 23:17:14.041255 sshd[6422]: Connection closed by 10.200.16.10 port 48448 Jul 15 23:17:14.041094 sshd-session[6420]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:14.047638 systemd-logind[1862]: Session 19 logged out. Waiting for processes to exit. Jul 15 23:17:14.047815 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:48448.service: Deactivated successfully. Jul 15 23:17:14.050994 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 23:17:14.056592 systemd-logind[1862]: Removed session 19. Jul 15 23:17:14.124182 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:48452.service - OpenSSH per-connection server daemon (10.200.16.10:48452). 
Jul 15 23:17:14.589306 sshd[6440]: Accepted publickey for core from 10.200.16.10 port 48452 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:17:14.591603 sshd-session[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:14.598031 systemd-logind[1862]: New session 20 of user core.
Jul 15 23:17:14.602539 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:17:15.130605 sshd[6442]: Connection closed by 10.200.16.10 port 48452
Jul 15 23:17:15.129836 sshd-session[6440]: pam_unix(sshd:session): session closed for user core
Jul 15 23:17:15.133611 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:48452.service: Deactivated successfully.
Jul 15 23:17:15.137019 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:17:15.138622 systemd-logind[1862]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:17:15.140774 systemd-logind[1862]: Removed session 20.
Jul 15 23:17:15.223467 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:48458.service - OpenSSH per-connection server daemon (10.200.16.10:48458).
Jul 15 23:17:15.718882 sshd[6452]: Accepted publickey for core from 10.200.16.10 port 48458 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:17:15.720215 sshd-session[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:15.724366 systemd-logind[1862]: New session 21 of user core.
Jul 15 23:17:15.730463 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:17:16.132979 sshd[6454]: Connection closed by 10.200.16.10 port 48458
Jul 15 23:17:16.133780 sshd-session[6452]: pam_unix(sshd:session): session closed for user core
Jul 15 23:17:16.140633 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:48458.service: Deactivated successfully.
Jul 15 23:17:16.143003 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:17:16.144948 systemd-logind[1862]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:17:16.147682 systemd-logind[1862]: Removed session 21.
Jul 15 23:17:17.777670 containerd[1886]: time="2025-07-15T23:17:17.777622266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"c23bb5922f18e755dde7738925d391f107dad0e30fc69d4a3077ba4796661004\" pid:6477 exited_at:{seconds:1752621437 nanos:776994130}"
Jul 15 23:17:21.213800 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:58332.service - OpenSSH per-connection server daemon (10.200.16.10:58332).
Jul 15 23:17:21.653289 sshd[6490]: Accepted publickey for core from 10.200.16.10 port 58332 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:17:21.654488 sshd-session[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:21.658452 systemd-logind[1862]: New session 22 of user core.
Jul 15 23:17:21.666459 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:17:22.024863 sshd[6492]: Connection closed by 10.200.16.10 port 58332
Jul 15 23:17:22.025255 sshd-session[6490]: pam_unix(sshd:session): session closed for user core
Jul 15 23:17:22.029153 systemd-logind[1862]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:17:22.030009 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:58332.service: Deactivated successfully.
Jul 15 23:17:22.032023 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:17:22.033810 systemd-logind[1862]: Removed session 22.
Jul 15 23:17:24.217755 containerd[1886]: time="2025-07-15T23:17:24.217646768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59d943a027a624d171db33e28ef2145c77369c57294f4af647a3df75f8627dc\" id:\"cffec91543c39f013402c247f808e469996488be701559e11c5ab19d95b0b80a\" pid:6513 exited_at:{seconds:1752621444 nanos:217331216}"
Jul 15 23:17:27.112578 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:58336.service - OpenSSH per-connection server daemon (10.200.16.10:58336).
Jul 15 23:17:27.586929 sshd[6525]: Accepted publickey for core from 10.200.16.10 port 58336 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:17:27.588173 sshd-session[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:27.592591 systemd-logind[1862]: New session 23 of user core.
Jul 15 23:17:27.598458 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:17:27.968564 sshd[6529]: Connection closed by 10.200.16.10 port 58336
Jul 15 23:17:27.969068 sshd-session[6525]: pam_unix(sshd:session): session closed for user core
Jul 15 23:17:27.972763 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:58336.service: Deactivated successfully.
Jul 15 23:17:27.974750 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 23:17:27.975788 systemd-logind[1862]: Session 23 logged out. Waiting for processes to exit.
Jul 15 23:17:27.977239 systemd-logind[1862]: Removed session 23.
Jul 15 23:17:33.066521 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:34952.service - OpenSSH per-connection server daemon (10.200.16.10:34952).
Jul 15 23:17:33.556827 sshd[6540]: Accepted publickey for core from 10.200.16.10 port 34952 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:17:33.558180 sshd-session[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:33.562180 systemd-logind[1862]: New session 24 of user core.
Jul 15 23:17:33.568602 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 23:17:33.961835 sshd[6542]: Connection closed by 10.200.16.10 port 34952
Jul 15 23:17:33.961314 sshd-session[6540]: pam_unix(sshd:session): session closed for user core
Jul 15 23:17:33.964419 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:34952.service: Deactivated successfully.
Jul 15 23:17:33.966832 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 23:17:33.970297 systemd-logind[1862]: Session 24 logged out. Waiting for processes to exit.
Jul 15 23:17:33.971220 systemd-logind[1862]: Removed session 24.
Jul 15 23:17:36.217859 containerd[1886]: time="2025-07-15T23:17:36.217808940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7d81d982050d35367d5f2968c40c19d75680e4448baf35b53a05ebab93e6c3d\" id:\"03ef1e54ec4380dfd247e211020e430859a4b49e34113caa735b678be2cf339d\" pid:6566 exited_at:{seconds:1752621456 nanos:217338135}"
Jul 15 23:17:39.052181 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:34960.service - OpenSSH per-connection server daemon (10.200.16.10:34960).
Jul 15 23:17:39.529875 sshd[6577]: Accepted publickey for core from 10.200.16.10 port 34960 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:17:39.531186 sshd-session[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:39.535468 systemd-logind[1862]: New session 25 of user core.
Jul 15 23:17:39.538413 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 23:17:39.928744 sshd[6579]: Connection closed by 10.200.16.10 port 34960
Jul 15 23:17:39.929382 sshd-session[6577]: pam_unix(sshd:session): session closed for user core
Jul 15 23:17:39.933227 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:34960.service: Deactivated successfully.
Jul 15 23:17:39.935391 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 23:17:39.936957 systemd-logind[1862]: Session 25 logged out. Waiting for processes to exit.
Jul 15 23:17:39.938490 systemd-logind[1862]: Removed session 25.
Jul 15 23:17:45.023715 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:35202.service - OpenSSH per-connection server daemon (10.200.16.10:35202).
Jul 15 23:17:45.504806 sshd[6591]: Accepted publickey for core from 10.200.16.10 port 35202 ssh2: RSA SHA256:/Pq5CjVUHr4RzIGdPPrQRJ932WsSSxmZlOV9aisTcGk
Jul 15 23:17:45.506091 sshd-session[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:45.510336 systemd-logind[1862]: New session 26 of user core.
Jul 15 23:17:45.515474 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 23:17:45.889057 sshd[6593]: Connection closed by 10.200.16.10 port 35202
Jul 15 23:17:45.889512 sshd-session[6591]: pam_unix(sshd:session): session closed for user core
Jul 15 23:17:45.894193 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:35202.service: Deactivated successfully.
Jul 15 23:17:45.896576 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 23:17:45.897628 systemd-logind[1862]: Session 26 logged out. Waiting for processes to exit.
Jul 15 23:17:45.899360 systemd-logind[1862]: Removed session 26.
Jul 15 23:17:47.729055 containerd[1886]: time="2025-07-15T23:17:47.729003054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dd67593d0448dd8ebfcec934764e23b9aee85783602f73485ec8b83052abdc1\" id:\"541e1e200aa6cad099796b47b27a7759e5e1fb88df7ff56469dc6bc53f91e27c\" pid:6616 exited_at:{seconds:1752621467 nanos:728722854}"