Oct 13 04:58:34.358681 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Oct 13 04:58:34.358700 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Mon Oct 13 03:30:16 -00 2025
Oct 13 04:58:34.358707 kernel: KASLR enabled
Oct 13 04:58:34.358712 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Oct 13 04:58:34.358716 kernel: printk: legacy bootconsole [pl11] enabled
Oct 13 04:58:34.358720 kernel: efi: EFI v2.7 by EDK II
Oct 13 04:58:34.358726 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Oct 13 04:58:34.358730 kernel: random: crng init done
Oct 13 04:58:34.358734 kernel: secureboot: Secure boot disabled
Oct 13 04:58:34.358738 kernel: ACPI: Early table checksum verification disabled
Oct 13 04:58:34.358742 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Oct 13 04:58:34.358746 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358751 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358756 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Oct 13 04:58:34.358761 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358766 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358771 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358776 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358780 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358785 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358789 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Oct 13 04:58:34.358794 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 04:58:34.358798 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Oct 13 04:58:34.358803 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 13 04:58:34.358807 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Oct 13 04:58:34.358812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Oct 13 04:58:34.358817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Oct 13 04:58:34.358822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Oct 13 04:58:34.358826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Oct 13 04:58:34.358831 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Oct 13 04:58:34.358835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Oct 13 04:58:34.358840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Oct 13 04:58:34.358844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Oct 13 04:58:34.358849 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Oct 13 04:58:34.358853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Oct 13 04:58:34.358858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Oct 13 04:58:34.358862 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Oct 13 04:58:34.358867 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Oct 13 04:58:34.358872 kernel: Zone ranges:
Oct 13 04:58:34.358877 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Oct 13 04:58:34.358883 kernel: DMA32 empty
Oct 13 04:58:34.358889 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Oct 13 04:58:34.358894 kernel: Device empty
Oct 13 04:58:34.358898 kernel: Movable zone start for each node
Oct 13 04:58:34.358903 kernel: Early memory node ranges
Oct 13 04:58:34.358908 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Oct 13 04:58:34.358912 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Oct 13 04:58:34.358917 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Oct 13 04:58:34.358922 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Oct 13 04:58:34.358926 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Oct 13 04:58:34.358931 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Oct 13 04:58:34.358936 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Oct 13 04:58:34.358941 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Oct 13 04:58:34.358946 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Oct 13 04:58:34.358950 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Oct 13 04:58:34.358955 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Oct 13 04:58:34.358960 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Oct 13 04:58:34.358965 kernel: psci: probing for conduit method from ACPI.
Oct 13 04:58:34.358969 kernel: psci: PSCIv1.1 detected in firmware.
Oct 13 04:58:34.358974 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 13 04:58:34.358979 kernel: psci: MIGRATE_INFO_TYPE not supported.
Oct 13 04:58:34.358983 kernel: psci: SMC Calling Convention v1.4
Oct 13 04:58:34.358989 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Oct 13 04:58:34.358994 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Oct 13 04:58:34.358998 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 13 04:58:34.359003 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 13 04:58:34.359008 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 13 04:58:34.359013 kernel: Detected PIPT I-cache on CPU0
Oct 13 04:58:34.359017 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Oct 13 04:58:34.359022 kernel: CPU features: detected: GIC system register CPU interface
Oct 13 04:58:34.359027 kernel: CPU features: detected: Spectre-v4
Oct 13 04:58:34.359032 kernel: CPU features: detected: Spectre-BHB
Oct 13 04:58:34.359036 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 13 04:58:34.359042 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 13 04:58:34.359047 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Oct 13 04:58:34.359051 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 13 04:58:34.359056 kernel: alternatives: applying boot alternatives
Oct 13 04:58:34.359062 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99
Oct 13 04:58:34.359067 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 04:58:34.359071 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 04:58:34.359076 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 04:58:34.359081 kernel: Fallback order for Node 0: 0
Oct 13 04:58:34.359086 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Oct 13 04:58:34.359091 kernel: Policy zone: Normal
Oct 13 04:58:34.359096 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 04:58:34.359100 kernel: software IO TLB: area num 2.
Oct 13 04:58:34.359105 kernel: software IO TLB: mapped [mem 0x0000000037bd0000-0x000000003bbd0000] (64MB)
Oct 13 04:58:34.359110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 13 04:58:34.359115 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 04:58:34.359120 kernel: rcu: RCU event tracing is enabled.
Oct 13 04:58:34.359125 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 13 04:58:34.359130 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 04:58:34.359134 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 04:58:34.359139 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 04:58:34.359145 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 13 04:58:34.359150 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 04:58:34.359155 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 04:58:34.359159 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 13 04:58:34.359164 kernel: GICv3: 960 SPIs implemented
Oct 13 04:58:34.359169 kernel: GICv3: 0 Extended SPIs implemented
Oct 13 04:58:34.359173 kernel: Root IRQ handler: gic_handle_irq
Oct 13 04:58:34.359178 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Oct 13 04:58:34.359183 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Oct 13 04:58:34.359187 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Oct 13 04:58:34.359192 kernel: ITS: No ITS available, not enabling LPIs
Oct 13 04:58:34.359198 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 04:58:34.359202 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Oct 13 04:58:34.359207 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 04:58:34.359212 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Oct 13 04:58:34.359217 kernel: Console: colour dummy device 80x25
Oct 13 04:58:34.359222 kernel: printk: legacy console [tty1] enabled
Oct 13 04:58:34.359227 kernel: ACPI: Core revision 20240827
Oct 13 04:58:34.359232 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Oct 13 04:58:34.359237 kernel: pid_max: default: 32768 minimum: 301
Oct 13 04:58:34.359243 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 04:58:34.359250 kernel: landlock: Up and running.
Oct 13 04:58:34.359255 kernel: SELinux: Initializing.
Oct 13 04:58:34.359261 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 04:58:34.359269 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 04:58:34.359275 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Oct 13 04:58:34.359280 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Oct 13 04:58:34.359285 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Oct 13 04:58:34.359291 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 04:58:34.359297 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 04:58:34.359302 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 04:58:34.359307 kernel: Remapping and enabling EFI services.
Oct 13 04:58:34.359312 kernel: smp: Bringing up secondary CPUs ...
Oct 13 04:58:34.359318 kernel: Detected PIPT I-cache on CPU1
Oct 13 04:58:34.359324 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Oct 13 04:58:34.359329 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Oct 13 04:58:34.359334 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 04:58:34.359339 kernel: SMP: Total of 2 processors activated.
Oct 13 04:58:34.359344 kernel: CPU: All CPU(s) started at EL1
Oct 13 04:58:34.359349 kernel: CPU features: detected: 32-bit EL0 Support
Oct 13 04:58:34.359355 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Oct 13 04:58:34.359361 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 13 04:58:34.359366 kernel: CPU features: detected: Common not Private translations
Oct 13 04:58:34.359371 kernel: CPU features: detected: CRC32 instructions
Oct 13 04:58:34.359376 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Oct 13 04:58:34.359381 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 13 04:58:34.359386 kernel: CPU features: detected: LSE atomic instructions
Oct 13 04:58:34.359393 kernel: CPU features: detected: Privileged Access Never
Oct 13 04:58:34.359398 kernel: CPU features: detected: Speculation barrier (SB)
Oct 13 04:58:34.359403 kernel: CPU features: detected: TLB range maintenance instructions
Oct 13 04:58:34.359408 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 13 04:58:34.359414 kernel: CPU features: detected: Scalable Vector Extension
Oct 13 04:58:34.359419 kernel: alternatives: applying system-wide alternatives
Oct 13 04:58:34.359424 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Oct 13 04:58:34.359429 kernel: SVE: maximum available vector length 16 bytes per vector
Oct 13 04:58:34.359435 kernel: SVE: default vector length 16 bytes per vector
Oct 13 04:58:34.359440 kernel: Memory: 3985524K/4194160K available (11200K kernel code, 2456K rwdata, 9080K rodata, 12992K init, 1038K bss, 187448K reserved, 16384K cma-reserved)
Oct 13 04:58:34.359446 kernel: devtmpfs: initialized
Oct 13 04:58:34.359451 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 04:58:34.359456 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 13 04:58:34.359461 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 13 04:58:34.359467 kernel: 0 pages in range for non-PLT usage
Oct 13 04:58:34.359472 kernel: 515040 pages in range for PLT usage
Oct 13 04:58:34.361526 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 04:58:34.361535 kernel: SMBIOS 3.1.0 present.
Oct 13 04:58:34.361541 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Oct 13 04:58:34.361547 kernel: DMI: Memory slots populated: 2/2
Oct 13 04:58:34.361552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 04:58:34.361558 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 13 04:58:34.361568 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 04:58:34.361573 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 04:58:34.361578 kernel: audit: initializing netlink subsys (disabled)
Oct 13 04:58:34.361584 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Oct 13 04:58:34.361589 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 04:58:34.361594 kernel: cpuidle: using governor menu
Oct 13 04:58:34.361600 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 13 04:58:34.361606 kernel: ASID allocator initialised with 32768 entries
Oct 13 04:58:34.361612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 04:58:34.361617 kernel: Serial: AMBA PL011 UART driver
Oct 13 04:58:34.361622 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 04:58:34.361627 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 04:58:34.361633 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 13 04:58:34.361638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 13 04:58:34.361644 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 04:58:34.361650 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 04:58:34.361655 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 13 04:58:34.361660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 13 04:58:34.361666 kernel: ACPI: Added _OSI(Module Device)
Oct 13 04:58:34.361671 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 04:58:34.361676 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 04:58:34.361683 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 04:58:34.361688 kernel: ACPI: Interpreter enabled
Oct 13 04:58:34.361693 kernel: ACPI: Using GIC for interrupt routing
Oct 13 04:58:34.361698 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Oct 13 04:58:34.361704 kernel: printk: legacy console [ttyAMA0] enabled
Oct 13 04:58:34.361709 kernel: printk: legacy bootconsole [pl11] disabled
Oct 13 04:58:34.361714 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Oct 13 04:58:34.361721 kernel: ACPI: CPU0 has been hot-added
Oct 13 04:58:34.361726 kernel: ACPI: CPU1 has been hot-added
Oct 13 04:58:34.361731 kernel: iommu: Default domain type: Translated
Oct 13 04:58:34.361736 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 13 04:58:34.361742 kernel: efivars: Registered efivars operations
Oct 13 04:58:34.361747 kernel: vgaarb: loaded
Oct 13 04:58:34.361752 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 13 04:58:34.361758 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 04:58:34.361764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 04:58:34.361769 kernel: pnp: PnP ACPI init
Oct 13 04:58:34.361774 kernel: pnp: PnP ACPI: found 0 devices
Oct 13 04:58:34.361779 kernel: NET: Registered PF_INET protocol family
Oct 13 04:58:34.361785 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 04:58:34.361790 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 04:58:34.361795 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 04:58:34.361802 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 04:58:34.361807 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 04:58:34.361812 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 04:58:34.361817 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 04:58:34.361823 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 04:58:34.361828 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 04:58:34.361834 kernel: PCI: CLS 0 bytes, default 64
Oct 13 04:58:34.361839 kernel: kvm [1]: HYP mode not available
Oct 13 04:58:34.361844 kernel: Initialise system trusted keyrings
Oct 13 04:58:34.361849 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 04:58:34.361855 kernel: Key type asymmetric registered
Oct 13 04:58:34.361860 kernel: Asymmetric key parser 'x509' registered
Oct 13 04:58:34.361865 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 13 04:58:34.361870 kernel: io scheduler mq-deadline registered
Oct 13 04:58:34.361876 kernel: io scheduler kyber registered
Oct 13 04:58:34.361881 kernel: io scheduler bfq registered
Oct 13 04:58:34.361886 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 04:58:34.361892 kernel: thunder_xcv, ver 1.0
Oct 13 04:58:34.361897 kernel: thunder_bgx, ver 1.0
Oct 13 04:58:34.361902 kernel: nicpf, ver 1.0
Oct 13 04:58:34.361907 kernel: nicvf, ver 1.0
Oct 13 04:58:34.362074 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 13 04:58:34.362150 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T04:58:30 UTC (1760331510)
Oct 13 04:58:34.362157 kernel: efifb: probing for efifb
Oct 13 04:58:34.362162 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 13 04:58:34.362168 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 13 04:58:34.362173 kernel: efifb: scrolling: redraw
Oct 13 04:58:34.362180 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 13 04:58:34.362186 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 04:58:34.362191 kernel: fb0: EFI VGA frame buffer device
Oct 13 04:58:34.362196 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Oct 13 04:58:34.362202 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 04:58:34.362207 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 13 04:58:34.362212 kernel: NET: Registered PF_INET6 protocol family
Oct 13 04:58:34.362218 kernel: watchdog: NMI not fully supported
Oct 13 04:58:34.362224 kernel: watchdog: Hard watchdog permanently disabled
Oct 13 04:58:34.362229 kernel: Segment Routing with IPv6
Oct 13 04:58:34.362234 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 04:58:34.362240 kernel: NET: Registered PF_PACKET protocol family
Oct 13 04:58:34.362245 kernel: Key type dns_resolver registered
Oct 13 04:58:34.362250 kernel: registered taskstats version 1
Oct 13 04:58:34.362256 kernel: Loading compiled-in X.509 certificates
Oct 13 04:58:34.362262 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 0d5be6bcdaeaf26c55e47d87e2567b03196058e4'
Oct 13 04:58:34.362267 kernel: Demotion targets for Node 0: null
Oct 13 04:58:34.362272 kernel: Key type .fscrypt registered
Oct 13 04:58:34.362277 kernel: Key type fscrypt-provisioning registered
Oct 13 04:58:34.362283 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 04:58:34.362288 kernel: ima: Allocated hash algorithm: sha1
Oct 13 04:58:34.362294 kernel: ima: No architecture policies found
Oct 13 04:58:34.362300 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 13 04:58:34.362305 kernel: clk: Disabling unused clocks
Oct 13 04:58:34.362310 kernel: PM: genpd: Disabling unused power domains
Oct 13 04:58:34.362315 kernel: Freeing unused kernel memory: 12992K
Oct 13 04:58:34.362320 kernel: Run /init as init process
Oct 13 04:58:34.362326 kernel: with arguments:
Oct 13 04:58:34.362331 kernel: /init
Oct 13 04:58:34.362337 kernel: with environment:
Oct 13 04:58:34.362342 kernel: HOME=/
Oct 13 04:58:34.362347 kernel: TERM=linux
Oct 13 04:58:34.362352 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 04:58:34.362357 kernel: hv_vmbus: Vmbus version:5.3
Oct 13 04:58:34.362363 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.362369 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.362375 kernel: hv_vmbus: registering driver hid_hyperv
Oct 13 04:58:34.362380 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Oct 13 04:58:34.362468 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Oct 13 04:58:34.362487 kernel: SCSI subsystem initialized
Oct 13 04:58:34.362493 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.362498 kernel: hv_vmbus: registering driver hyperv_keyboard
Oct 13 04:58:34.362504 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Oct 13 04:58:34.362511 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 13 04:58:34.362516 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 13 04:58:34.362521 kernel: PTP clock support registered
Oct 13 04:58:34.362527 kernel: hv_utils: Registering HyperV Utility Driver
Oct 13 04:58:34.362532 kernel: hv_vmbus: registering driver hv_utils
Oct 13 04:58:34.362537 kernel: hv_utils: Heartbeat IC version 3.0
Oct 13 04:58:34.362542 kernel: hv_utils: Shutdown IC version 3.2
Oct 13 04:58:34.362548 kernel: hv_utils: TimeSync IC version 4.0
Oct 13 04:58:34.362554 kernel: hv_vmbus: registering driver hv_storvsc
Oct 13 04:58:34.362654 kernel: scsi host0: storvsc_host_t
Oct 13 04:58:34.362734 kernel: scsi host1: storvsc_host_t
Oct 13 04:58:34.362820 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Oct 13 04:58:34.362908 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Oct 13 04:58:34.362985 kernel: sd 0:0:0:0: [sda] 71737344 512-byte logical blocks: (36.7 GB/34.2 GiB)
Oct 13 04:58:34.363060 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Oct 13 04:58:34.363135 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 13 04:58:34.363209 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Oct 13 04:58:34.363283 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Oct 13 04:58:34.363368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Oct 13 04:58:34.363437 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Oct 13 04:58:34.363444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 04:58:34.363535 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 13 04:58:34.363612 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Oct 13 04:58:34.363619 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 04:58:34.363695 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Oct 13 04:58:34.363701 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 04:58:34.363706 kernel: device-mapper: uevent: version 1.0.3
Oct 13 04:58:34.363712 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 04:58:34.363717 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 13 04:58:34.363723 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363728 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363734 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363739 kernel: raid6: neonx8 gen() 18518 MB/s
Oct 13 04:58:34.363744 kernel: raid6: neonx4 gen() 18561 MB/s
Oct 13 04:58:34.363749 kernel: raid6: neonx2 gen() 17069 MB/s
Oct 13 04:58:34.363755 kernel: raid6: neonx1 gen() 14427 MB/s
Oct 13 04:58:34.363760 kernel: raid6: int64x8 gen() 10558 MB/s
Oct 13 04:58:34.363765 kernel: raid6: int64x4 gen() 10623 MB/s
Oct 13 04:58:34.363771 kernel: raid6: int64x2 gen() 8980 MB/s
Oct 13 04:58:34.363776 kernel: raid6: int64x1 gen() 7022 MB/s
Oct 13 04:58:34.363781 kernel: raid6: using algorithm neonx4 gen() 18561 MB/s
Oct 13 04:58:34.363786 kernel: raid6: .... xor() 15152 MB/s, rmw enabled
Oct 13 04:58:34.363792 kernel: raid6: using neon recovery algorithm
Oct 13 04:58:34.363797 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363802 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363807 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363813 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363818 kernel: xor: measuring software checksum speed
Oct 13 04:58:34.363823 kernel: 8regs : 28597 MB/sec
Oct 13 04:58:34.363829 kernel: 32regs : 28826 MB/sec
Oct 13 04:58:34.363834 kernel: arm64_neon : 37638 MB/sec
Oct 13 04:58:34.363839 kernel: xor: using function: arm64_neon (37638 MB/sec)
Oct 13 04:58:34.363844 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363849 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 04:58:34.363856 kernel: BTRFS: device fsid 976d1a25-6e06-4ce9-b674-96d83e61f95d devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (340)
Oct 13 04:58:34.363861 kernel: BTRFS info (device dm-0): first mount of filesystem 976d1a25-6e06-4ce9-b674-96d83e61f95d
Oct 13 04:58:34.363867 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 13 04:58:34.363872 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 04:58:34.363877 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 04:58:34.363883 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:34.363888 kernel: loop: module loaded
Oct 13 04:58:34.363894 kernel: loop0: detected capacity change from 0 to 91456
Oct 13 04:58:34.363899 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 04:58:34.363906 systemd[1]: Successfully made /usr/ read-only.
Oct 13 04:58:34.363914 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 04:58:34.363920 systemd[1]: Detected virtualization microsoft.
Oct 13 04:58:34.363925 systemd[1]: Detected architecture arm64.
Oct 13 04:58:34.363931 systemd[1]: Running in initrd.
Oct 13 04:58:34.363937 systemd[1]: No hostname configured, using default hostname.
Oct 13 04:58:34.363943 systemd[1]: Hostname set to .
Oct 13 04:58:34.363949 systemd[1]: Initializing machine ID from random generator.
Oct 13 04:58:34.363954 systemd[1]: Queued start job for default target initrd.target.
Oct 13 04:58:34.363960 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 04:58:34.363966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 04:58:34.363972 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 04:58:34.363978 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 04:58:34.363989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 04:58:34.363997 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 04:58:34.364004 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 04:58:34.364010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 04:58:34.364016 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 04:58:34.364021 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 04:58:34.364027 systemd[1]: Reached target paths.target - Path Units.
Oct 13 04:58:34.364033 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 04:58:34.364040 systemd[1]: Reached target swap.target - Swaps.
Oct 13 04:58:34.364045 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 04:58:34.364051 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 04:58:34.364057 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 04:58:34.364063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 04:58:34.364068 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 04:58:34.364074 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 04:58:34.364081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 04:58:34.364087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 04:58:34.364093 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 04:58:34.364099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 04:58:34.364105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 04:58:34.364111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 04:58:34.364118 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 04:58:34.364124 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 04:58:34.364130 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 04:58:34.364135 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 04:58:34.364141 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 04:58:34.364163 systemd-journald[476]: Collecting audit messages is disabled.
Oct 13 04:58:34.364179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 04:58:34.364187 systemd-journald[476]: Journal started
Oct 13 04:58:34.364202 systemd-journald[476]: Runtime Journal (/run/log/journal/ad736281f4cf4af99d6378d4aafcfa73) is 8M, max 78.5M, 70.5M free.
Oct 13 04:58:34.378392 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 04:58:34.379182 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 04:58:34.383723 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 04:58:34.390060 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 04:58:34.403756 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 04:58:34.426497 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 04:58:34.430397 systemd-modules-load[479]: Inserted module 'br_netfilter'
Oct 13 04:58:34.434572 kernel: Bridge firewalling registered
Oct 13 04:58:34.435245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 04:58:34.442517 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 04:58:34.453314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 04:58:34.470833 systemd-tmpfiles[493]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 04:58:34.477495 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 04:58:34.488634 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 04:58:34.498104 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 04:58:34.507006 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 04:58:34.514131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 04:58:34.538303 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 04:58:34.553569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 04:58:34.564644 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 04:58:34.630857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 04:58:34.714590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 04:58:34.720731 systemd-resolved[502]: Positive Trust Anchors:
Oct 13 04:58:34.720739 systemd-resolved[502]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 04:58:34.720741 systemd-resolved[502]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 13 04:58:34.720760 systemd-resolved[502]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 04:58:34.735604 systemd-resolved[502]: Defaulting to hostname 'linux'.
Oct 13 04:58:34.736329 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 04:58:34.746236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 04:58:34.795270 dracut-cmdline[521]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99
Oct 13 04:58:34.936495 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 04:58:34.976515 kernel: iscsi: registered transport (tcp)
Oct 13 04:58:35.004756 kernel: iscsi: registered transport (qla4xxx)
Oct 13 04:58:35.004802 kernel: QLogic iSCSI HBA Driver
Oct 13 04:58:35.071248 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 04:58:35.091622 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 04:58:35.097851 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 04:58:35.146633 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 04:58:35.155622 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 04:58:35.159879 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 04:58:35.196848 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 04:58:35.206646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 04:58:35.294196 systemd-udevd[750]: Using default interface naming scheme 'v257'.
Oct 13 04:58:35.299788 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 04:58:35.310509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 04:58:35.320014 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 04:58:35.337597 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 04:58:35.353007 dracut-pre-trigger[858]: rd.md=0: removing MD RAID activation
Oct 13 04:58:35.380855 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 04:58:35.386578 systemd-networkd[859]: lo: Link UP
Oct 13 04:58:35.386581 systemd-networkd[859]: lo: Gained carrier
Oct 13 04:58:35.387300 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 04:58:35.396060 systemd[1]: Reached target network.target - Network.
Oct 13 04:58:35.402383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 04:58:35.454633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 04:58:35.462126 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 04:58:35.533559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#247 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 13 04:58:35.555471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 04:58:35.555592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 04:58:35.580575 kernel: hv_vmbus: registering driver hv_netvsc
Oct 13 04:58:35.565042 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 04:58:35.573701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 04:58:35.600871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 04:58:35.601319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 04:58:35.619337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 04:58:35.642596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 04:58:35.656500 kernel: hv_netvsc 0022487b-3886-0022-487b-38860022487b eth0: VF slot 1 added
Oct 13 04:58:35.676823 systemd-networkd[859]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 04:58:35.688501 kernel: hv_vmbus: registering driver hv_pci
Oct 13 04:58:35.688525 kernel: hv_pci b5ab9cf4-fbbd-472f-95ce-62cac2ffc169: PCI VMBus probing: Using version 0x10004
Oct 13 04:58:35.676831 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 04:58:35.677886 systemd-networkd[859]: eth0: Link UP
Oct 13 04:58:35.710973 kernel: hv_pci b5ab9cf4-fbbd-472f-95ce-62cac2ffc169: PCI host bridge to bus fbbd:00
Oct 13 04:58:35.711161 kernel: pci_bus fbbd:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Oct 13 04:58:35.711277 kernel: pci_bus fbbd:00: No busn resource found for root bus, will use [bus 00-ff]
Oct 13 04:58:35.678223 systemd-networkd[859]: eth0: Gained carrier
Oct 13 04:58:35.722483 kernel: pci fbbd:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Oct 13 04:58:35.678233 systemd-networkd[859]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 04:58:35.734948 kernel: pci fbbd:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Oct 13 04:58:35.740559 kernel: pci fbbd:00:02.0: enabling Extended Tags
Oct 13 04:58:35.753578 kernel: pci fbbd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fbbd:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Oct 13 04:58:35.762850 kernel: pci_bus fbbd:00: busn_res: [bus 00-ff] end is updated to 00
Oct 13 04:58:35.763048 kernel: pci fbbd:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Oct 13 04:58:35.765550 systemd-networkd[859]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16
Oct 13 04:58:35.934540 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Oct 13 04:58:35.946619 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 04:58:35.966683 kernel: mlx5_core fbbd:00:02.0: enabling device (0000 -> 0002)
Oct 13 04:58:35.976122 kernel: mlx5_core fbbd:00:02.0: PTM is not supported by PCIe
Oct 13 04:58:35.976351 kernel: mlx5_core fbbd:00:02.0: firmware version: 16.30.5006
Oct 13 04:58:36.044372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Oct 13 04:58:36.063247 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Oct 13 04:58:36.150144 kernel: hv_netvsc 0022487b-3886-0022-487b-38860022487b eth0: VF registering: eth1
Oct 13 04:58:36.150383 kernel: mlx5_core fbbd:00:02.0 eth1: joined to eth0
Oct 13 04:58:36.155790 kernel: mlx5_core fbbd:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Oct 13 04:58:36.166133 systemd-networkd[859]: eth1: Interface name change detected, renamed to enP64445s1.
Oct 13 04:58:36.170773 kernel: mlx5_core fbbd:00:02.0 enP64445s1: renamed from eth1
Oct 13 04:58:36.188998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Oct 13 04:58:36.271011 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 04:58:36.280151 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 04:58:36.285002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 04:58:36.294086 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 04:58:36.306711 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 04:58:36.317967 kernel: mlx5_core fbbd:00:02.0 enP64445s1: Link up
Oct 13 04:58:36.334041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 04:58:36.347855 systemd-networkd[859]: enP64445s1: Link UP
Oct 13 04:58:36.351063 kernel: hv_netvsc 0022487b-3886-0022-487b-38860022487b eth0: Data path switched to VF: enP64445s1
Oct 13 04:58:36.678748 systemd-networkd[859]: enP64445s1: Gained carrier
Oct 13 04:58:37.187073 disk-uuid[967]: Warning: The kernel is still using the old partition table.
Oct 13 04:58:37.187073 disk-uuid[967]: The new table will be used at the next reboot or after you
Oct 13 04:58:37.187073 disk-uuid[967]: run partprobe(8) or kpartx(8)
Oct 13 04:58:37.187073 disk-uuid[967]: The operation has completed successfully.
Oct 13 04:58:37.200415 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 04:58:37.200527 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 04:58:37.210620 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 04:58:37.271498 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1083)
Oct 13 04:58:37.280933 kernel: BTRFS info (device sda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc
Oct 13 04:58:37.280952 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 04:58:37.327531 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 04:58:37.327549 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 04:58:37.336539 kernel: BTRFS info (device sda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc
Oct 13 04:58:37.337556 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 04:58:37.342250 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 04:58:37.358574 systemd-networkd[859]: eth0: Gained IPv6LL
Oct 13 04:58:38.676720 ignition[1102]: Ignition 2.22.0
Oct 13 04:58:38.679431 ignition[1102]: Stage: fetch-offline
Oct 13 04:58:38.679574 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Oct 13 04:58:38.683388 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 04:58:38.679582 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 04:58:38.689055 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 13 04:58:38.679654 ignition[1102]: parsed url from cmdline: ""
Oct 13 04:58:38.679657 ignition[1102]: no config URL provided
Oct 13 04:58:38.679661 ignition[1102]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 04:58:38.679668 ignition[1102]: no config at "/usr/lib/ignition/user.ign"
Oct 13 04:58:38.679671 ignition[1102]: failed to fetch config: resource requires networking
Oct 13 04:58:38.679911 ignition[1102]: Ignition finished successfully
Oct 13 04:58:38.723658 ignition[1108]: Ignition 2.22.0
Oct 13 04:58:38.723664 ignition[1108]: Stage: fetch
Oct 13 04:58:38.723948 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Oct 13 04:58:38.723957 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 04:58:38.724034 ignition[1108]: parsed url from cmdline: ""
Oct 13 04:58:38.724037 ignition[1108]: no config URL provided
Oct 13 04:58:38.724041 ignition[1108]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 04:58:38.724048 ignition[1108]: no config at "/usr/lib/ignition/user.ign"
Oct 13 04:58:38.724064 ignition[1108]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Oct 13 04:58:38.855862 ignition[1108]: GET result: OK
Oct 13 04:58:38.858264 ignition[1108]: config has been read from IMDS userdata
Oct 13 04:58:38.858306 ignition[1108]: parsing config with SHA512: 34b8b65fc9afad021f5f287de2cf34af0470636fceaaf5aba05225bdd0f9e4361cb0e206f0bee9dbbea50c84fcac92ea075ed38b4d661129e53cbc1ca59dc001
Oct 13 04:58:38.861749 unknown[1108]: fetched base config from "system"
Oct 13 04:58:38.861754 unknown[1108]: fetched base config from "system"
Oct 13 04:58:38.862014 ignition[1108]: fetch: fetch complete
Oct 13 04:58:38.861757 unknown[1108]: fetched user config from "azure"
Oct 13 04:58:38.862018 ignition[1108]: fetch: fetch passed
Oct 13 04:58:38.866662 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 13 04:58:38.862056 ignition[1108]: Ignition finished successfully
Oct 13 04:58:38.873823 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 04:58:38.910029 ignition[1114]: Ignition 2.22.0
Oct 13 04:58:38.910045 ignition[1114]: Stage: kargs
Oct 13 04:58:38.914899 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 04:58:38.910217 ignition[1114]: no configs at "/usr/lib/ignition/base.d"
Oct 13 04:58:38.921096 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 04:58:38.910224 ignition[1114]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 04:58:38.910778 ignition[1114]: kargs: kargs passed
Oct 13 04:58:38.910830 ignition[1114]: Ignition finished successfully
Oct 13 04:58:38.955938 ignition[1120]: Ignition 2.22.0
Oct 13 04:58:38.955953 ignition[1120]: Stage: disks
Oct 13 04:58:38.959342 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 04:58:38.956126 ignition[1120]: no configs at "/usr/lib/ignition/base.d"
Oct 13 04:58:38.964332 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 04:58:38.956133 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 04:58:38.970215 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 04:58:38.956722 ignition[1120]: disks: disks passed
Oct 13 04:58:38.978559 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 04:58:38.956773 ignition[1120]: Ignition finished successfully
Oct 13 04:58:38.985633 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 04:58:38.994055 systemd[1]: Reached target basic.target - Basic System.
Oct 13 04:58:39.002939 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 04:58:39.141921 systemd-fsck[1128]: ROOT: clean, 15/7340400 files, 470001/7359488 blocks
Oct 13 04:58:39.151673 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 04:58:39.162930 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 04:58:40.830517 kernel: EXT4-fs (sda9): mounted filesystem a42694d5-feb9-4394-9ac1-a45818242d2d r/w with ordered data mode. Quota mode: none.
Oct 13 04:58:40.831251 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 04:58:40.838092 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 04:58:40.889973 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 04:58:40.908167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 04:58:40.920111 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 13 04:58:40.929757 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 04:58:40.930580 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 04:58:40.965829 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1142)
Oct 13 04:58:40.965852 kernel: BTRFS info (device sda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc
Oct 13 04:58:40.965861 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 04:58:40.948763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 04:58:40.958626 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 04:58:40.987885 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 04:58:40.987933 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 04:58:40.989043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 04:58:41.626645 coreos-metadata[1144]: Oct 13 04:58:41.626 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Oct 13 04:58:41.634662 coreos-metadata[1144]: Oct 13 04:58:41.634 INFO Fetch successful
Oct 13 04:58:41.638757 coreos-metadata[1144]: Oct 13 04:58:41.638 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Oct 13 04:58:41.655629 coreos-metadata[1144]: Oct 13 04:58:41.655 INFO Fetch successful
Oct 13 04:58:41.668701 coreos-metadata[1144]: Oct 13 04:58:41.668 INFO wrote hostname ci-4487.0.0-a-bf8a300537 to /sysroot/etc/hostname
Oct 13 04:58:41.675794 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 04:58:42.005889 initrd-setup-root[1173]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 04:58:42.127884 initrd-setup-root[1180]: cut: /sysroot/etc/group: No such file or directory
Oct 13 04:58:42.146118 initrd-setup-root[1187]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 04:58:42.151339 initrd-setup-root[1194]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 04:58:43.099453 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 04:58:43.104906 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 04:58:43.122674 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 04:58:43.146701 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 04:58:43.156552 kernel: BTRFS info (device sda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc
Oct 13 04:58:43.168518 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 04:58:43.184422 ignition[1263]: INFO : Ignition 2.22.0
Oct 13 04:58:43.184422 ignition[1263]: INFO : Stage: mount
Oct 13 04:58:43.190958 ignition[1263]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 04:58:43.190958 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 04:58:43.190958 ignition[1263]: INFO : mount: mount passed
Oct 13 04:58:43.190958 ignition[1263]: INFO : Ignition finished successfully
Oct 13 04:58:43.189876 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 04:58:43.196198 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 04:58:43.223892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 04:58:43.250543 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1274)
Oct 13 04:58:43.259664 kernel: BTRFS info (device sda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc
Oct 13 04:58:43.259705 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 04:58:43.268027 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 04:58:43.268074 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 04:58:43.269586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 04:58:43.301165 ignition[1290]: INFO : Ignition 2.22.0
Oct 13 04:58:43.301165 ignition[1290]: INFO : Stage: files
Oct 13 04:58:43.307473 ignition[1290]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 04:58:43.307473 ignition[1290]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 04:58:43.307473 ignition[1290]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 04:58:43.328548 ignition[1290]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 04:58:43.328548 ignition[1290]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 04:58:43.405870 ignition[1290]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 04:58:43.411152 ignition[1290]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 04:58:43.411152 ignition[1290]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 04:58:43.406238 unknown[1290]: wrote ssh authorized keys file for user: core
Oct 13 04:58:43.432743 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 13 04:58:43.440326 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Oct 13 04:58:43.471166 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 04:58:43.534505 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 13 04:58:43.534505 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 04:58:43.551387 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 13 04:58:43.635621 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 13 04:58:43.635621 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 13 04:58:43.635621 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Oct 13 04:58:43.909333 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 04:58:44.332817 ignition[1290]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 13 04:58:44.332817 ignition[1290]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 04:58:44.513989 ignition[1290]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 04:58:44.522210 ignition[1290]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 04:58:44.522210 ignition[1290]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 04:58:44.522210 ignition[1290]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 04:58:44.522210 ignition[1290]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 04:58:44.522210 ignition[1290]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 04:58:44.522210 ignition[1290]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 04:58:44.522210 ignition[1290]: INFO : files: files passed
Oct 13 04:58:44.522210 ignition[1290]: INFO : Ignition finished successfully
Oct 13 04:58:44.530243 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 04:58:44.540178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 04:58:44.562073 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 04:58:44.576563 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 04:58:44.585798 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 04:58:44.617487 initrd-setup-root-after-ignition[1320]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 04:58:44.617487 initrd-setup-root-after-ignition[1320]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 04:58:44.629700 initrd-setup-root-after-ignition[1324]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 04:58:44.623899 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 04:58:44.634795 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 04:58:44.646560 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 04:58:44.697124 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 04:58:44.697213 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 04:58:44.702613 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 04:58:44.709976 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 04:58:44.719276 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 04:58:44.720167 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 04:58:44.755560 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 04:58:44.763558 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 04:58:44.786817 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 04:58:44.786973 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 04:58:44.796271 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 04:58:44.805288 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 04:58:44.813419 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 04:58:44.813553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 04:58:44.824912 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 04:58:44.829068 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 04:58:44.836876 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 04:58:44.845045 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 04:58:44.853254 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 04:58:44.861518 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 04:58:44.870320 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 04:58:44.878229 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 04:58:44.887093 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 04:58:44.894758 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 04:58:44.903498 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 04:58:44.910152 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 04:58:44.910270 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 04:58:44.921058 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 04:58:44.925254 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 04:58:44.933689 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 04:58:44.933750 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 04:58:44.942359 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 04:58:44.942469 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 04:58:44.954659 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 04:58:44.954743 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 04:58:44.959649 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 04:58:44.959713 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 04:58:44.967290 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 13 04:58:44.967363 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 04:58:45.031534 ignition[1344]: INFO : Ignition 2.22.0
Oct 13 04:58:45.031534 ignition[1344]: INFO : Stage: umount
Oct 13 04:58:45.031534 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 04:58:45.031534 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 04:58:45.031534 ignition[1344]: INFO : umount: umount passed
Oct 13 04:58:45.031534 ignition[1344]: INFO : Ignition finished successfully
Oct 13 04:58:44.978042 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 04:58:45.004719 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 04:58:45.019304 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 04:58:45.019440 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 04:58:45.026818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 04:58:45.026923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 04:58:45.036202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 04:58:45.036291 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 04:58:45.046470 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 04:58:45.048503 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 04:58:45.061727 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 04:58:45.063443 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 04:58:45.070329 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 04:58:45.070418 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 04:58:45.076113 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 04:58:45.076155 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 04:58:45.083692 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 13 04:58:45.083744 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 13 04:58:45.091509 systemd[1]: Stopped target network.target - Network.
Oct 13 04:58:45.098618 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 04:58:45.098683 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 04:58:45.108220 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 04:58:45.116635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 04:58:45.120689 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 04:58:45.135628 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 04:58:45.139173 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 04:58:45.146729 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 04:58:45.146784 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 04:58:45.154544 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 04:58:45.154579 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 04:58:45.162253 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 04:58:45.162313 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 04:58:45.169556 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 04:58:45.169592 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 04:58:45.178378 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 04:58:45.185395 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 04:58:45.193815 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 04:58:45.194531 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 04:58:45.194620 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 04:58:45.207625 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 04:58:45.207717 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 04:58:45.224201 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 04:58:45.224315 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 04:58:45.392300 kernel: hv_netvsc 0022487b-3886-0022-487b-38860022487b eth0: Data path switched from VF: enP64445s1
Oct 13 04:58:45.237684 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 04:58:45.244412 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 04:58:45.244460 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 04:58:45.254861 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 04:58:45.254924 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 04:58:45.263203 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 04:58:45.272162 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 04:58:45.272231 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 04:58:45.280080 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 04:58:45.280133 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 04:58:45.287497 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 04:58:45.287533 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 04:58:45.295374 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 04:58:45.327063 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 04:58:45.327402 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 04:58:45.336822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 04:58:45.336872 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 04:58:45.344732 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 04:58:45.344760 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 04:58:45.352302 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 04:58:45.352361 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 04:58:45.362851 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 04:58:45.362899 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 04:58:45.380203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 04:58:45.380258 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 04:58:45.396661 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 04:58:45.411517 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 04:58:45.411608 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 04:58:45.419947 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 04:58:45.420006 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 04:58:45.424681 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 13 04:58:45.424731 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 04:58:45.436113 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 04:58:45.436172 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 04:58:45.444469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 04:58:45.444532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 04:58:45.453959 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 04:58:45.454054 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 04:58:45.612670 systemd-journald[476]: Received SIGTERM from PID 1 (systemd).
Oct 13 04:58:45.462137 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 04:58:45.462198 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 04:58:45.470788 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 04:58:45.478801 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 04:58:45.524031 systemd[1]: Switching root.
Oct 13 04:58:45.632863 systemd-journald[476]: Journal stopped
Oct 13 04:58:52.990376 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 04:58:52.990396 kernel: SELinux: policy capability open_perms=1
Oct 13 04:58:52.990405 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 04:58:52.990413 kernel: SELinux: policy capability always_check_network=0
Oct 13 04:58:52.990418 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 04:58:52.990424 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 04:58:52.990430 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 04:58:52.990436 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 04:58:52.990441 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 04:58:52.990448 kernel: audit: type=1403 audit(1760331526.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 04:58:52.990455 systemd[1]: Successfully loaded SELinux policy in 181.033ms.
Oct 13 04:58:52.990462 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.495ms.
Oct 13 04:58:52.990469 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 04:58:52.991514 systemd[1]: Detected virtualization microsoft.
Oct 13 04:58:52.991543 systemd[1]: Detected architecture arm64.
Oct 13 04:58:52.991551 systemd[1]: Detected first boot.
Oct 13 04:58:52.991559 systemd[1]: Hostname set to .
Oct 13 04:58:52.991566 systemd[1]: Initializing machine ID from random generator.
Oct 13 04:58:52.991574 zram_generator::config[1386]: No configuration found.
Oct 13 04:58:52.991581 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 04:58:52.991588 systemd[1]: Populated /etc with preset unit settings.
Oct 13 04:58:52.991597 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 04:58:52.991603 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 04:58:52.991610 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 04:58:52.991617 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 04:58:52.991625 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 04:58:52.991632 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 04:58:52.991638 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 04:58:52.991645 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 04:58:52.991651 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 04:58:52.991659 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 04:58:52.991665 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 04:58:52.991672 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 04:58:52.991678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 04:58:52.991685 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 04:58:52.991691 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 04:58:52.991698 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 04:58:52.991706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 04:58:52.991712 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 13 04:58:52.991723 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 04:58:52.991731 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 04:58:52.991738 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 04:58:52.991745 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 04:58:52.991753 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 04:58:52.991760 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 04:58:52.991766 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 04:58:52.991773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 04:58:52.991780 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 04:58:52.991786 systemd[1]: Reached target swap.target - Swaps.
Oct 13 04:58:52.991793 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 04:58:52.991801 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 04:58:52.991807 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 04:58:52.991814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 04:58:52.991822 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 04:58:52.991828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 04:58:52.991835 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 04:58:52.991842 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 04:58:52.991848 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 04:58:52.991855 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 04:58:52.991861 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 04:58:52.991869 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 04:58:52.991876 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 04:58:52.991883 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 04:58:52.991890 systemd[1]: Reached target machines.target - Containers.
Oct 13 04:58:52.991897 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 04:58:52.991904 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 04:58:52.991910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 04:58:52.991918 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 04:58:52.991925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 04:58:52.991932 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 04:58:52.991938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 04:58:52.991945 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 04:58:52.991952 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 04:58:52.991959 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 04:58:52.991966 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 04:58:52.991973 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 04:58:52.991980 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 04:58:52.991986 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 04:58:52.991993 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 04:58:52.992000 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 04:58:52.992007 kernel: fuse: init (API version 7.41)
Oct 13 04:58:52.992014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 04:58:52.992020 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 04:58:52.992027 kernel: ACPI: bus type drm_connector registered
Oct 13 04:58:52.992034 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 04:58:52.992041 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 04:58:52.992047 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 04:58:52.992081 systemd-journald[1466]: Collecting audit messages is disabled.
Oct 13 04:58:52.992096 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 04:58:52.992104 systemd-journald[1466]: Journal started
Oct 13 04:58:52.992120 systemd-journald[1466]: Runtime Journal (/run/log/journal/dbe19b42adf1435eb65b3523d37a0bc2) is 8M, max 78.5M, 70.5M free.
Oct 13 04:58:52.197579 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 04:58:52.211953 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 13 04:58:52.212410 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 04:58:52.212695 systemd[1]: systemd-journald.service: Consumed 2.235s CPU time.
Oct 13 04:58:53.006767 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 04:58:53.007730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 04:58:53.012196 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 04:58:53.017067 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 04:58:53.021560 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 04:58:53.026602 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 04:58:53.032216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 04:58:53.037460 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 04:58:53.037737 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 04:58:53.044900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 04:58:53.045177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 04:58:53.050633 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 04:58:53.050873 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 04:58:53.057330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 04:58:53.057816 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 04:58:53.063622 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 04:58:53.063859 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 04:58:53.070324 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 04:58:53.070578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 04:58:53.075180 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 04:58:53.081604 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 04:58:53.088445 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 04:58:53.093963 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 04:58:53.099077 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 04:58:53.105062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 04:58:53.118739 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 04:58:53.123772 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 13 04:58:53.129471 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 04:58:53.142516 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 04:58:53.146969 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 04:58:53.147001 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 04:58:53.152321 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 04:58:53.157373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 04:58:53.158400 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 04:58:53.170640 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 04:58:53.175419 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 04:58:53.182145 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 04:58:53.186530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 04:58:53.187337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 04:58:53.192192 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 04:58:53.198722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 04:58:53.204382 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 04:58:53.209128 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 04:58:53.225319 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 04:58:53.233072 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 04:58:53.239257 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 04:58:53.266360 systemd-journald[1466]: Time spent on flushing to /var/log/journal/dbe19b42adf1435eb65b3523d37a0bc2 is 11.134ms for 935 entries.
Oct 13 04:58:53.266360 systemd-journald[1466]: System Journal (/var/log/journal/dbe19b42adf1435eb65b3523d37a0bc2) is 8M, max 2.6G, 2.6G free.
Oct 13 04:58:53.399653 systemd-journald[1466]: Received client request to flush runtime journal.
Oct 13 04:58:53.399724 kernel: loop1: detected capacity change from 0 to 100624
Oct 13 04:58:53.365622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 04:58:53.400954 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 04:58:53.415059 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 04:58:53.415631 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 04:58:53.467638 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
Oct 13 04:58:53.467650 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
Oct 13 04:58:53.470585 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 04:58:53.477457 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 04:58:53.851497 kernel: loop2: detected capacity change from 0 to 211168
Oct 13 04:58:53.922505 kernel: loop3: detected capacity change from 0 to 119344
Oct 13 04:58:53.998861 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 04:58:54.092439 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 04:58:54.099272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 04:58:54.104389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 04:58:54.123541 systemd-tmpfiles[1547]: ACLs are not supported, ignoring.
Oct 13 04:58:54.123552 systemd-tmpfiles[1547]: ACLs are not supported, ignoring.
Oct 13 04:58:54.125892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 04:58:54.132823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 04:58:54.155626 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 04:58:54.158262 systemd-udevd[1550]: Using default interface naming scheme 'v257'.
Oct 13 04:58:54.210804 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 04:58:54.304400 systemd-resolved[1546]: Positive Trust Anchors:
Oct 13 04:58:54.304417 systemd-resolved[1546]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 04:58:54.304420 systemd-resolved[1546]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 13 04:58:54.304439 systemd-resolved[1546]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 04:58:54.425913 kernel: loop4: detected capacity change from 0 to 27760
Oct 13 04:58:54.424763 systemd-resolved[1546]: Using system hostname 'ci-4487.0.0-a-bf8a300537'.
Oct 13 04:58:54.442260 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 04:58:54.447124 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 04:58:54.905299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 04:58:54.915131 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 04:58:54.948964 kernel: loop5: detected capacity change from 0 to 100624
Oct 13 04:58:54.964601 kernel: loop6: detected capacity change from 0 to 211168
Oct 13 04:58:54.981647 kernel: loop7: detected capacity change from 0 to 119344
Oct 13 04:58:54.989607 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 13 04:58:54.996494 kernel: loop1: detected capacity change from 0 to 27760
Oct 13 04:58:55.006410 (sd-merge)[1576]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'.
Oct 13 04:58:55.009850 (sd-merge)[1576]: Merged extensions into '/usr'.
Oct 13 04:58:55.016737 systemd[1]: Reload requested from client PID 1525 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 04:58:55.016752 systemd[1]: Reloading...
Oct 13 04:58:55.051666 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 13 04:58:55.099907 kernel: hv_vmbus: registering driver hv_balloon
Oct 13 04:58:55.115051 kernel: hv_vmbus: registering driver hyperv_fb
Oct 13 04:58:55.115153 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Oct 13 04:58:55.115172 kernel: hv_balloon: Memory hot add disabled on ARM64
Oct 13 04:58:55.115189 zram_generator::config[1631]: No configuration found.
Oct 13 04:58:55.128747 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Oct 13 04:58:55.128843 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Oct 13 04:58:55.134776 kernel: Console: switching to colour dummy device 80x25
Oct 13 04:58:55.143270 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 04:58:55.143369 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 04:58:55.151145 systemd-networkd[1564]: lo: Link UP
Oct 13 04:58:55.151155 systemd-networkd[1564]: lo: Gained carrier
Oct 13 04:58:55.152369 systemd-networkd[1564]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 04:58:55.152376 systemd-networkd[1564]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 04:58:55.220720 kernel: mlx5_core fbbd:00:02.0 enP64445s1: Link up
Oct 13 04:58:55.259528 kernel: hv_netvsc 0022487b-3886-0022-487b-38860022487b eth0: Data path switched to VF: enP64445s1
Oct 13 04:58:55.259519 systemd-networkd[1564]: enP64445s1: Link UP
Oct 13 04:58:55.259683 systemd-networkd[1564]: eth0: Link UP
Oct 13 04:58:55.259686 systemd-networkd[1564]: eth0: Gained carrier
Oct 13 04:58:55.259724 systemd-networkd[1564]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 04:58:55.271013 systemd-networkd[1564]: enP64445s1: Gained carrier
Oct 13 04:58:55.272470 systemd-networkd[1564]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 04:58:55.278681 systemd-networkd[1564]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16
Oct 13 04:58:55.376259 systemd[1]: Reloading finished in 359 ms.
Oct 13 04:58:55.400681 kernel: MACsec IEEE 802.1AE
Oct 13 04:58:55.398173 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 04:58:55.404426 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 04:58:55.427800 systemd[1]: Reached target network.target - Network.
Oct 13 04:58:55.442246 systemd[1]: Starting ensure-sysext.service...
Oct 13 04:58:55.448648 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 04:58:55.458719 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 04:58:55.467795 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 04:58:55.476720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 04:58:55.520770 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 04:58:55.520793 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 04:58:55.520993 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 04:58:55.521130 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 04:58:55.523398 systemd[1]: Reload requested from client PID 1741 ('systemctl') (unit ensure-sysext.service)... Oct 13 04:58:55.523411 systemd[1]: Reloading... Oct 13 04:58:55.523779 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 04:58:55.524030 systemd-tmpfiles[1757]: ACLs are not supported, ignoring. Oct 13 04:58:55.524165 systemd-tmpfiles[1757]: ACLs are not supported, ignoring. Oct 13 04:58:55.584523 zram_generator::config[1802]: No configuration found. Oct 13 04:58:55.622145 systemd-tmpfiles[1757]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 04:58:55.622155 systemd-tmpfiles[1757]: Skipping /boot Oct 13 04:58:55.627993 systemd-tmpfiles[1757]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 04:58:55.628118 systemd-tmpfiles[1757]: Skipping /boot Oct 13 04:58:55.743946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Oct 13 04:58:55.748826 systemd[1]: Reloading finished in 224 ms. Oct 13 04:58:55.761507 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 04:58:55.785571 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 04:58:55.803206 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Oct 13 04:58:55.821680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 04:58:55.827717 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 04:58:55.832943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 04:58:55.838748 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 04:58:55.845796 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 04:58:55.855066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:58:55.857320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 04:58:55.864444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 04:58:55.872327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 04:58:55.876468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:58:55.876948 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:58:55.879185 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 04:58:55.879442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 04:58:55.885189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 04:58:55.885518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 04:58:55.891822 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Oct 13 04:58:55.897303 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 04:58:55.897443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 04:58:55.909361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:58:55.910676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 04:58:55.915949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 04:58:55.924637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 04:58:55.928696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:58:55.928790 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:58:55.929515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 04:58:55.931416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 04:58:55.936818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 04:58:55.937066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 04:58:55.942739 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 04:58:55.942977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 04:58:55.958147 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 04:58:55.964615 systemd[1]: Finished ensure-sysext.service. Oct 13 04:58:55.972343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 13 04:58:55.973415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 04:58:55.984738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 04:58:55.991648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 04:58:55.999532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 04:58:56.004242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:58:56.004294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:58:56.004330 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 04:58:56.010375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 04:58:56.010569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 04:58:56.015302 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 04:58:56.015456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 04:58:56.020078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 04:58:56.020223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 04:58:56.025640 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 04:58:56.025783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 04:58:56.032669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 04:58:56.032740 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 13 04:58:56.049219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 04:58:56.049496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:58:56.057128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 04:58:56.149840 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 04:58:56.325352 augenrules[1915]: No rules Oct 13 04:58:56.326695 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 04:58:56.326902 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 04:58:56.494769 systemd-networkd[1564]: eth0: Gained IPv6LL Oct 13 04:58:56.496762 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 04:58:56.502671 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 04:58:57.261380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:58:58.415885 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 04:58:58.421047 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 04:59:04.159982 ldconfig[1863]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 04:59:04.175984 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 04:59:04.182155 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 04:59:04.207585 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 04:59:04.212675 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 04:59:04.216978 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 13 04:59:04.221873 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 04:59:04.227327 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 04:59:04.231675 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 04:59:04.236550 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 04:59:04.241565 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 04:59:04.241594 systemd[1]: Reached target paths.target - Path Units. Oct 13 04:59:04.245163 systemd[1]: Reached target timers.target - Timer Units. Oct 13 04:59:04.275863 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 04:59:04.281594 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 04:59:04.286813 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 04:59:04.292027 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 04:59:04.297076 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 04:59:04.302790 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 04:59:04.319724 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 04:59:04.325079 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 04:59:04.329264 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 04:59:04.332989 systemd[1]: Reached target basic.target - Basic System. Oct 13 04:59:04.336821 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Oct 13 04:59:04.336841 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 04:59:04.352121 systemd[1]: Starting chronyd.service - NTP client/server... Oct 13 04:59:04.363586 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 04:59:04.368648 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 13 04:59:04.375599 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 04:59:04.382223 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 04:59:04.391778 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 04:59:04.396982 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 04:59:04.401517 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 04:59:04.402712 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Oct 13 04:59:04.407621 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Oct 13 04:59:04.416256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:04.424349 KVP[1941]: KVP starting; pid is:1941 Oct 13 04:59:04.424886 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 04:59:04.426149 chronyd[1931]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Oct 13 04:59:04.432667 KVP[1941]: KVP LIC Version: 3.1 Oct 13 04:59:04.433505 kernel: hv_utils: KVP IC version 4.0 Oct 13 04:59:04.434764 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 04:59:04.442371 jq[1936]: false Oct 13 04:59:04.443671 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Oct 13 04:59:04.448707 chronyd[1931]: Timezone right/UTC failed leap second check, ignoring Oct 13 04:59:04.448887 chronyd[1931]: Loaded seccomp filter (level 2) Oct 13 04:59:04.449730 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 04:59:04.458764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 04:59:04.465038 extend-filesystems[1940]: Found /dev/sda6 Oct 13 04:59:04.473271 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 04:59:04.479457 extend-filesystems[1940]: Found /dev/sda9 Oct 13 04:59:04.486216 extend-filesystems[1940]: Checking size of /dev/sda9 Oct 13 04:59:04.479925 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 04:59:04.480795 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 04:59:04.481884 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 04:59:04.496608 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 04:59:04.501988 systemd[1]: Started chronyd.service - NTP client/server. Oct 13 04:59:04.509539 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 04:59:04.511425 jq[1963]: true Oct 13 04:59:04.515776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 04:59:04.515978 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 04:59:04.517926 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 04:59:04.518097 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 04:59:04.547861 jq[1979]: true Oct 13 04:59:04.547687 systemd[1]: motdgen.service: Deactivated successfully. 
Oct 13 04:59:04.547880 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 04:59:04.550768 (ntainerd)[1982]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 04:59:04.553342 extend-filesystems[1940]: Resized partition /dev/sda9 Oct 13 04:59:04.559928 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 04:59:04.567805 update_engine[1957]: I20251013 04:59:04.567728 1957 main.cc:92] Flatcar Update Engine starting Oct 13 04:59:04.590414 extend-filesystems[2002]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 04:59:04.608523 kernel: EXT4-fs (sda9): resizing filesystem from 7359488 to 7376891 blocks Oct 13 04:59:04.620095 systemd-logind[1953]: New seat seat0. Oct 13 04:59:04.624158 tar[1972]: linux-arm64/LICENSE Oct 13 04:59:04.624158 tar[1972]: linux-arm64/helm Oct 13 04:59:04.624071 systemd-logind[1953]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Oct 13 04:59:04.626816 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 04:59:04.639494 kernel: EXT4-fs (sda9): resized filesystem to 7376891 Oct 13 04:59:04.686536 extend-filesystems[2002]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 13 04:59:04.686536 extend-filesystems[2002]: old_desc_blocks = 4, new_desc_blocks = 4 Oct 13 04:59:04.686536 extend-filesystems[2002]: The filesystem on /dev/sda9 is now 7376891 (4k) blocks long. Oct 13 04:59:04.726230 extend-filesystems[1940]: Resized filesystem in /dev/sda9 Oct 13 04:59:04.735757 bash[2018]: Updated "/home/core/.ssh/authorized_keys" Oct 13 04:59:04.692201 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 04:59:04.693266 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 04:59:04.717322 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Oct 13 04:59:04.730864 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 04:59:04.800831 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 04:59:04.827899 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 04:59:04.835630 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 04:59:04.851982 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Oct 13 04:59:04.863067 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 04:59:04.866328 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 04:59:04.873352 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 04:59:04.893389 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Oct 13 04:59:04.909079 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 04:59:04.914359 dbus-daemon[1934]: [system] SELinux support is enabled Oct 13 04:59:04.915711 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 04:59:04.923723 update_engine[1957]: I20251013 04:59:04.921924 1957 update_check_scheduler.cc:74] Next update check in 5m18s Oct 13 04:59:04.925833 dbus-daemon[1934]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 04:59:04.926302 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 04:59:04.934908 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 04:59:04.941896 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 04:59:04.948856 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 04:59:04.948882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 13 04:59:04.957532 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 04:59:04.957553 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 04:59:04.962818 systemd[1]: Started update-engine.service - Update Engine. Oct 13 04:59:04.970734 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 04:59:04.980407 coreos-metadata[1933]: Oct 13 04:59:04.980 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 13 04:59:04.985549 coreos-metadata[1933]: Oct 13 04:59:04.984 INFO Fetch successful Oct 13 04:59:04.985636 coreos-metadata[1933]: Oct 13 04:59:04.985 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 13 04:59:04.989421 coreos-metadata[1933]: Oct 13 04:59:04.989 INFO Fetch successful Oct 13 04:59:04.990200 coreos-metadata[1933]: Oct 13 04:59:04.990 INFO Fetching http://168.63.129.16/machine/839c27da-229d-471c-ab82-95d7e2fe5591/0d1c8465%2D97a8%2D4806%2D8701%2D73e8cbe02d50.%5Fci%2D4487.0.0%2Da%2Dbf8a300537?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 13 04:59:04.992381 coreos-metadata[1933]: Oct 13 04:59:04.992 INFO Fetch successful Oct 13 04:59:04.992950 coreos-metadata[1933]: Oct 13 04:59:04.992 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 13 04:59:05.004997 coreos-metadata[1933]: Oct 13 04:59:05.004 INFO Fetch successful Oct 13 04:59:05.031713 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 04:59:05.037979 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 04:59:05.103770 tar[1972]: linux-arm64/README.md Oct 13 04:59:05.120602 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 13 04:59:05.129485 locksmithd[2119]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 04:59:05.427563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:05.432672 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:05.464876 containerd[1982]: time="2025-10-13T04:59:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 04:59:05.465752 containerd[1982]: time="2025-10-13T04:59:05.465720160Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 04:59:05.474815 containerd[1982]: time="2025-10-13T04:59:05.474777448Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.24µs" Oct 13 04:59:05.474815 containerd[1982]: time="2025-10-13T04:59:05.474810200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 04:59:05.474815 containerd[1982]: time="2025-10-13T04:59:05.474824720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 04:59:05.475512 containerd[1982]: time="2025-10-13T04:59:05.475470952Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 04:59:05.475531 containerd[1982]: time="2025-10-13T04:59:05.475513720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 04:59:05.475544 containerd[1982]: time="2025-10-13T04:59:05.475536744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 04:59:05.475609 containerd[1982]: 
time="2025-10-13T04:59:05.475590144Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 04:59:05.475609 containerd[1982]: time="2025-10-13T04:59:05.475602224Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 04:59:05.476208 containerd[1982]: time="2025-10-13T04:59:05.476181632Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 04:59:05.476243 containerd[1982]: time="2025-10-13T04:59:05.476207456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 04:59:05.476243 containerd[1982]: time="2025-10-13T04:59:05.476217576Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 04:59:05.476243 containerd[1982]: time="2025-10-13T04:59:05.476223688Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 04:59:05.476711 containerd[1982]: time="2025-10-13T04:59:05.476689920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 04:59:05.477268 containerd[1982]: time="2025-10-13T04:59:05.476879280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 04:59:05.477301 containerd[1982]: time="2025-10-13T04:59:05.477287200Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 
04:59:05.477319 containerd[1982]: time="2025-10-13T04:59:05.477299896Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 04:59:05.477367 containerd[1982]: time="2025-10-13T04:59:05.477331728Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 04:59:05.477942 containerd[1982]: time="2025-10-13T04:59:05.477920536Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 04:59:05.478016 containerd[1982]: time="2025-10-13T04:59:05.478001112Z" level=info msg="metadata content store policy set" policy=shared Oct 13 04:59:05.496900 containerd[1982]: time="2025-10-13T04:59:05.496851984Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 04:59:05.496979 containerd[1982]: time="2025-10-13T04:59:05.496927312Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 04:59:05.496979 containerd[1982]: time="2025-10-13T04:59:05.496938664Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 04:59:05.496979 containerd[1982]: time="2025-10-13T04:59:05.496947368Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 04:59:05.496979 containerd[1982]: time="2025-10-13T04:59:05.496955416Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 04:59:05.496979 containerd[1982]: time="2025-10-13T04:59:05.496965048Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 04:59:05.496979 containerd[1982]: time="2025-10-13T04:59:05.496973544Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 04:59:05.496979 containerd[1982]: 
time="2025-10-13T04:59:05.496981304Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 04:59:05.497092 containerd[1982]: time="2025-10-13T04:59:05.496989392Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 04:59:05.497092 containerd[1982]: time="2025-10-13T04:59:05.496996256Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 04:59:05.497092 containerd[1982]: time="2025-10-13T04:59:05.497001632Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 04:59:05.497092 containerd[1982]: time="2025-10-13T04:59:05.497010152Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 04:59:05.497170 containerd[1982]: time="2025-10-13T04:59:05.497148816Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 04:59:05.497191 containerd[1982]: time="2025-10-13T04:59:05.497171304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 04:59:05.497191 containerd[1982]: time="2025-10-13T04:59:05.497185888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497192496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497208160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497216064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497223872Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497230496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497237480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497244312Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 04:59:05.497263 containerd[1982]: time="2025-10-13T04:59:05.497251480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 04:59:05.497376 containerd[1982]: time="2025-10-13T04:59:05.497309960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 04:59:05.497376 containerd[1982]: time="2025-10-13T04:59:05.497321512Z" level=info msg="Start snapshots syncer" Oct 13 04:59:05.497376 containerd[1982]: time="2025-10-13T04:59:05.497344096Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 04:59:05.497592 containerd[1982]: time="2025-10-13T04:59:05.497560752Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 04:59:05.497696 containerd[1982]: time="2025-10-13T04:59:05.497604920Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 04:59:05.497696 containerd[1982]: time="2025-10-13T04:59:05.497662328Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 04:59:05.497787 containerd[1982]: time="2025-10-13T04:59:05.497766776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 04:59:05.497787 containerd[1982]: time="2025-10-13T04:59:05.497782768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 04:59:05.497812 containerd[1982]: time="2025-10-13T04:59:05.497796200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 04:59:05.497812 containerd[1982]: time="2025-10-13T04:59:05.497804640Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 04:59:05.497812 containerd[1982]: time="2025-10-13T04:59:05.497812448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 04:59:05.497846 containerd[1982]: time="2025-10-13T04:59:05.497819528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 04:59:05.497846 containerd[1982]: time="2025-10-13T04:59:05.497826704Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 04:59:05.497877 containerd[1982]: time="2025-10-13T04:59:05.497846808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 04:59:05.497877 containerd[1982]: time="2025-10-13T04:59:05.497855184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 04:59:05.497877 containerd[1982]: time="2025-10-13T04:59:05.497865056Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497895992Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497907536Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497912800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497918512Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497923216Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497928848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497935584Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497948560Z" level=info msg="runtime interface created" Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497951888Z" level=info msg="created NRI interface" Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497957072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 04:59:05.497995 containerd[1982]: time="2025-10-13T04:59:05.497965440Z" level=info msg="Connect containerd service" Oct 13 04:59:05.498129 containerd[1982]: time="2025-10-13T04:59:05.498004784Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 04:59:05.499283 
containerd[1982]: time="2025-10-13T04:59:05.498828968Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 04:59:05.806086 kubelet[2142]: E1013 04:59:05.805978 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:05.809346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:05.809459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:59:05.809991 systemd[1]: kubelet.service: Consumed 555ms CPU time, 257.5M memory peak. Oct 13 04:59:05.815126 containerd[1982]: time="2025-10-13T04:59:05.815077584Z" level=info msg="Start subscribing containerd event" Oct 13 04:59:05.815193 containerd[1982]: time="2025-10-13T04:59:05.815153304Z" level=info msg="Start recovering state" Oct 13 04:59:05.815245 containerd[1982]: time="2025-10-13T04:59:05.815229568Z" level=info msg="Start event monitor" Oct 13 04:59:05.815259 containerd[1982]: time="2025-10-13T04:59:05.815246968Z" level=info msg="Start cni network conf syncer for default" Oct 13 04:59:05.815259 containerd[1982]: time="2025-10-13T04:59:05.815253920Z" level=info msg="Start streaming server" Oct 13 04:59:05.815282 containerd[1982]: time="2025-10-13T04:59:05.815260992Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 04:59:05.815282 containerd[1982]: time="2025-10-13T04:59:05.815266528Z" level=info msg="runtime interface starting up..." Oct 13 04:59:05.815282 containerd[1982]: time="2025-10-13T04:59:05.815271728Z" level=info msg="starting plugins..." 
Oct 13 04:59:05.815282 containerd[1982]: time="2025-10-13T04:59:05.815282712Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 04:59:05.815584 containerd[1982]: time="2025-10-13T04:59:05.815563056Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 04:59:05.815621 containerd[1982]: time="2025-10-13T04:59:05.815609864Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 04:59:05.815667 containerd[1982]: time="2025-10-13T04:59:05.815657392Z" level=info msg="containerd successfully booted in 0.351225s" Oct 13 04:59:05.816615 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 04:59:05.823699 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 04:59:05.828175 systemd[1]: Startup finished in 2.904s (kernel) + 13.319s (initrd) + 19.252s (userspace) = 35.476s. Oct 13 04:59:06.731805 login[2116]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Oct 13 04:59:06.759176 login[2118]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:06.769118 systemd-logind[1953]: New session 1 of user core. Oct 13 04:59:06.770011 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 04:59:06.772553 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 04:59:06.805894 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 04:59:06.809723 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 04:59:06.842536 (systemd)[2171]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 04:59:06.844903 systemd-logind[1953]: New session c1 of user core. 
Oct 13 04:59:06.969233 waagent[2113]: 2025-10-13T04:59:06.969149Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Oct 13 04:59:06.973680 waagent[2113]: 2025-10-13T04:59:06.973632Z INFO Daemon Daemon OS: flatcar 4487.0.0 Oct 13 04:59:06.976938 waagent[2113]: 2025-10-13T04:59:06.976904Z INFO Daemon Daemon Python: 3.11.13 Oct 13 04:59:06.980189 waagent[2113]: 2025-10-13T04:59:06.980150Z INFO Daemon Daemon Run daemon Oct 13 04:59:06.982984 waagent[2113]: 2025-10-13T04:59:06.982917Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4487.0.0' Oct 13 04:59:06.989568 waagent[2113]: 2025-10-13T04:59:06.989520Z INFO Daemon Daemon Using waagent for provisioning Oct 13 04:59:06.993546 waagent[2113]: 2025-10-13T04:59:06.993509Z INFO Daemon Daemon Activate resource disk Oct 13 04:59:06.996760 waagent[2113]: 2025-10-13T04:59:06.996729Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 13 04:59:07.004483 waagent[2113]: 2025-10-13T04:59:07.004434Z INFO Daemon Daemon Found device: None Oct 13 04:59:07.007735 waagent[2113]: 2025-10-13T04:59:07.007699Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 13 04:59:07.013603 waagent[2113]: 2025-10-13T04:59:07.013575Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 13 04:59:07.021875 waagent[2113]: 2025-10-13T04:59:07.021830Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 04:59:07.025960 waagent[2113]: 2025-10-13T04:59:07.025930Z INFO Daemon Daemon Running default provisioning handler Oct 13 04:59:07.035057 waagent[2113]: 2025-10-13T04:59:07.035012Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Oct 13 04:59:07.044576 waagent[2113]: 2025-10-13T04:59:07.044537Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 13 04:59:07.051749 waagent[2113]: 2025-10-13T04:59:07.051715Z INFO Daemon Daemon cloud-init is enabled: False Oct 13 04:59:07.055598 waagent[2113]: 2025-10-13T04:59:07.055569Z INFO Daemon Daemon Copying ovf-env.xml Oct 13 04:59:07.156203 systemd[2171]: Queued start job for default target default.target. Oct 13 04:59:07.175604 systemd[2171]: Created slice app.slice - User Application Slice. Oct 13 04:59:07.175754 systemd[2171]: Reached target paths.target - Paths. Oct 13 04:59:07.175794 systemd[2171]: Reached target timers.target - Timers. Oct 13 04:59:07.176778 systemd[2171]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 04:59:07.183904 waagent[2113]: 2025-10-13T04:59:07.179969Z INFO Daemon Daemon Successfully mounted dvd Oct 13 04:59:07.187181 systemd[2171]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 04:59:07.188107 systemd[2171]: Reached target sockets.target - Sockets. Oct 13 04:59:07.188149 systemd[2171]: Reached target basic.target - Basic System. Oct 13 04:59:07.188172 systemd[2171]: Reached target default.target - Main User Target. Oct 13 04:59:07.188192 systemd[2171]: Startup finished in 338ms. Oct 13 04:59:07.188277 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 04:59:07.189181 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 04:59:07.214093 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 13 04:59:07.217030 waagent[2113]: 2025-10-13T04:59:07.216944Z INFO Daemon Daemon Detect protocol endpoint Oct 13 04:59:07.220426 waagent[2113]: 2025-10-13T04:59:07.220386Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 04:59:07.224418 waagent[2113]: 2025-10-13T04:59:07.224388Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Oct 13 04:59:07.229167 waagent[2113]: 2025-10-13T04:59:07.229143Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 13 04:59:07.232968 waagent[2113]: 2025-10-13T04:59:07.232926Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 13 04:59:07.236516 waagent[2113]: 2025-10-13T04:59:07.236453Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 13 04:59:07.304662 waagent[2113]: 2025-10-13T04:59:07.304613Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 13 04:59:07.309417 waagent[2113]: 2025-10-13T04:59:07.309394Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 13 04:59:07.313305 waagent[2113]: 2025-10-13T04:59:07.313209Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 13 04:59:07.424590 waagent[2113]: 2025-10-13T04:59:07.424502Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 13 04:59:07.430002 waagent[2113]: 2025-10-13T04:59:07.429940Z INFO Daemon Daemon Forcing an update of the goal state. Oct 13 04:59:07.438001 waagent[2113]: 2025-10-13T04:59:07.437966Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 04:59:07.454596 waagent[2113]: 2025-10-13T04:59:07.454565Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Oct 13 04:59:07.458754 waagent[2113]: 2025-10-13T04:59:07.458720Z INFO Daemon Oct 13 04:59:07.460746 waagent[2113]: 2025-10-13T04:59:07.460720Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d216b8bd-70bb-41bd-be01-9b5aa9f56d02 eTag: 6379669601855186752 source: Fabric] Oct 13 04:59:07.469052 waagent[2113]: 2025-10-13T04:59:07.469021Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Oct 13 04:59:07.473724 waagent[2113]: 2025-10-13T04:59:07.473695Z INFO Daemon Oct 13 04:59:07.475800 waagent[2113]: 2025-10-13T04:59:07.475778Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Oct 13 04:59:07.484538 waagent[2113]: 2025-10-13T04:59:07.484513Z INFO Daemon Daemon Downloading artifacts profile blob Oct 13 04:59:07.603780 waagent[2113]: 2025-10-13T04:59:07.603666Z INFO Daemon Downloaded certificate {'thumbprint': '61365BB9CB62C550689AF38E351D2639580BC5AB', 'hasPrivateKey': True} Oct 13 04:59:07.611358 waagent[2113]: 2025-10-13T04:59:07.611314Z INFO Daemon Fetch goal state completed Oct 13 04:59:07.650120 waagent[2113]: 2025-10-13T04:59:07.650080Z INFO Daemon Daemon Starting provisioning Oct 13 04:59:07.654154 waagent[2113]: 2025-10-13T04:59:07.654121Z INFO Daemon Daemon Handle ovf-env.xml. Oct 13 04:59:07.657747 waagent[2113]: 2025-10-13T04:59:07.657723Z INFO Daemon Daemon Set hostname [ci-4487.0.0-a-bf8a300537] Oct 13 04:59:07.689570 waagent[2113]: 2025-10-13T04:59:07.689516Z INFO Daemon Daemon Publish hostname [ci-4487.0.0-a-bf8a300537] Oct 13 04:59:07.694244 waagent[2113]: 2025-10-13T04:59:07.694207Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 13 04:59:07.698894 waagent[2113]: 2025-10-13T04:59:07.698861Z INFO Daemon Daemon Primary interface is [eth0] Oct 13 04:59:07.721372 systemd-networkd[1564]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:59:07.721380 systemd-networkd[1564]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Oct 13 04:59:07.721461 systemd-networkd[1564]: eth0: DHCP lease lost Oct 13 04:59:07.732465 login[2116]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:07.733466 waagent[2113]: 2025-10-13T04:59:07.732794Z INFO Daemon Daemon Create user account if not exists Oct 13 04:59:07.737724 waagent[2113]: 2025-10-13T04:59:07.737673Z INFO Daemon Daemon User core already exists, skip useradd Oct 13 04:59:07.745612 waagent[2113]: 2025-10-13T04:59:07.742171Z INFO Daemon Daemon Configure sudoer Oct 13 04:59:07.745724 systemd-logind[1953]: New session 2 of user core. Oct 13 04:59:07.746738 systemd-networkd[1564]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 13 04:59:07.750158 waagent[2113]: 2025-10-13T04:59:07.750102Z INFO Daemon Daemon Configure sshd Oct 13 04:59:07.753277 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 04:59:07.758674 waagent[2113]: 2025-10-13T04:59:07.758606Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Oct 13 04:59:07.767980 waagent[2113]: 2025-10-13T04:59:07.767909Z INFO Daemon Daemon Deploy ssh public key. Oct 13 04:59:08.898738 waagent[2113]: 2025-10-13T04:59:08.898691Z INFO Daemon Daemon Provisioning complete Oct 13 04:59:08.912697 waagent[2113]: 2025-10-13T04:59:08.912659Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 13 04:59:08.917164 waagent[2113]: 2025-10-13T04:59:08.917131Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Oct 13 04:59:08.924217 waagent[2113]: 2025-10-13T04:59:08.924189Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Oct 13 04:59:09.025023 waagent[2219]: 2025-10-13T04:59:09.024952Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Oct 13 04:59:09.026515 waagent[2219]: 2025-10-13T04:59:09.025401Z INFO ExtHandler ExtHandler OS: flatcar 4487.0.0 Oct 13 04:59:09.026515 waagent[2219]: 2025-10-13T04:59:09.025458Z INFO ExtHandler ExtHandler Python: 3.11.13 Oct 13 04:59:09.026515 waagent[2219]: 2025-10-13T04:59:09.025524Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Oct 13 04:59:09.087560 waagent[2219]: 2025-10-13T04:59:09.087496Z INFO ExtHandler ExtHandler Distro: flatcar-4487.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Oct 13 04:59:09.087859 waagent[2219]: 2025-10-13T04:59:09.087829Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 04:59:09.088004 waagent[2219]: 2025-10-13T04:59:09.087979Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 04:59:09.094350 waagent[2219]: 2025-10-13T04:59:09.094302Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 04:59:09.099803 waagent[2219]: 2025-10-13T04:59:09.099770Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Oct 13 04:59:09.100264 waagent[2219]: 2025-10-13T04:59:09.100231Z INFO ExtHandler Oct 13 04:59:09.100409 waagent[2219]: 2025-10-13T04:59:09.100384Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e34ae52d-9955-4973-b992-2a7e50551293 eTag: 6379669601855186752 source: Fabric] Oct 13 04:59:09.100743 waagent[2219]: 2025-10-13T04:59:09.100712Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Oct 13 04:59:09.101305 waagent[2219]: 2025-10-13T04:59:09.101273Z INFO ExtHandler Oct 13 04:59:09.101440 waagent[2219]: 2025-10-13T04:59:09.101417Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 13 04:59:09.105126 waagent[2219]: 2025-10-13T04:59:09.105097Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 13 04:59:09.203539 waagent[2219]: 2025-10-13T04:59:09.203422Z INFO ExtHandler Downloaded certificate {'thumbprint': '61365BB9CB62C550689AF38E351D2639580BC5AB', 'hasPrivateKey': True} Oct 13 04:59:09.204069 waagent[2219]: 2025-10-13T04:59:09.204035Z INFO ExtHandler Fetch goal state completed Oct 13 04:59:09.216808 waagent[2219]: 2025-10-13T04:59:09.216770Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Oct 13 04:59:09.220508 waagent[2219]: 2025-10-13T04:59:09.220213Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2219 Oct 13 04:59:09.220508 waagent[2219]: 2025-10-13T04:59:09.220332Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Oct 13 04:59:09.220765 waagent[2219]: 2025-10-13T04:59:09.220730Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Oct 13 04:59:09.222009 waagent[2219]: 2025-10-13T04:59:09.221971Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 13 04:59:09.222428 waagent[2219]: 2025-10-13T04:59:09.222393Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4487.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Oct 13 04:59:09.222661 waagent[2219]: 2025-10-13T04:59:09.222629Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Oct 13 04:59:09.223191 waagent[2219]: 2025-10-13T04:59:09.223156Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Oct 13 04:59:09.288522 waagent[2219]: 2025-10-13T04:59:09.288468Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 13 04:59:09.288840 waagent[2219]: 2025-10-13T04:59:09.288808Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 13 04:59:09.293358 waagent[2219]: 2025-10-13T04:59:09.293334Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 13 04:59:09.298060 systemd[1]: Reload requested from client PID 2234 ('systemctl') (unit waagent.service)... Oct 13 04:59:09.298073 systemd[1]: Reloading... Oct 13 04:59:09.365501 zram_generator::config[2274]: No configuration found. Oct 13 04:59:09.522994 systemd[1]: Reloading finished in 224 ms. Oct 13 04:59:09.533947 waagent[2219]: 2025-10-13T04:59:09.533805Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Oct 13 04:59:09.536425 waagent[2219]: 2025-10-13T04:59:09.535821Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Oct 13 04:59:10.450512 waagent[2219]: 2025-10-13T04:59:10.449795Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Oct 13 04:59:10.450512 waagent[2219]: 2025-10-13T04:59:10.450113Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Oct 13 04:59:10.450844 waagent[2219]: 2025-10-13T04:59:10.450710Z INFO ExtHandler ExtHandler Starting env monitor service. 
Oct 13 04:59:10.450942 waagent[2219]: 2025-10-13T04:59:10.450908Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 04:59:10.450983 waagent[2219]: 2025-10-13T04:59:10.450971Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 04:59:10.451284 waagent[2219]: 2025-10-13T04:59:10.451246Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 13 04:59:10.451397 waagent[2219]: 2025-10-13T04:59:10.451371Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 13 04:59:10.451629 waagent[2219]: 2025-10-13T04:59:10.451569Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 13 04:59:10.451739 waagent[2219]: 2025-10-13T04:59:10.451702Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 13 04:59:10.451739 waagent[2219]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 13 04:59:10.451739 waagent[2219]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Oct 13 04:59:10.451739 waagent[2219]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 13 04:59:10.451739 waagent[2219]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 13 04:59:10.451739 waagent[2219]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 04:59:10.451739 waagent[2219]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 04:59:10.451964 waagent[2219]: 2025-10-13T04:59:10.451937Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 13 04:59:10.452327 waagent[2219]: 2025-10-13T04:59:10.452251Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 13 04:59:10.452469 waagent[2219]: 2025-10-13T04:59:10.452320Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Oct 13 04:59:10.452716 waagent[2219]: 2025-10-13T04:59:10.452616Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 04:59:10.452716 waagent[2219]: 2025-10-13T04:59:10.452686Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 13 04:59:10.453050 waagent[2219]: 2025-10-13T04:59:10.453026Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 04:59:10.454200 waagent[2219]: 2025-10-13T04:59:10.454171Z INFO EnvHandler ExtHandler Configure routes Oct 13 04:59:10.454522 waagent[2219]: 2025-10-13T04:59:10.454341Z INFO EnvHandler ExtHandler Gateway:None Oct 13 04:59:10.454522 waagent[2219]: 2025-10-13T04:59:10.454375Z INFO EnvHandler ExtHandler Routes:None Oct 13 04:59:10.458519 waagent[2219]: 2025-10-13T04:59:10.458467Z INFO ExtHandler ExtHandler Oct 13 04:59:10.458568 waagent[2219]: 2025-10-13T04:59:10.458544Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5002d86a-74ff-4116-a7e4-ed0c2e69e427 correlation 769701f0-b627-414b-b624-66cdcf787741 created: 2025-10-13T04:57:53.272240Z] Oct 13 04:59:10.458821 waagent[2219]: 2025-10-13T04:59:10.458791Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Oct 13 04:59:10.459201 waagent[2219]: 2025-10-13T04:59:10.459175Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Oct 13 04:59:10.537229 waagent[2219]: 2025-10-13T04:59:10.536740Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Oct 13 04:59:10.537229 waagent[2219]: Try `iptables -h' or 'iptables --help' for more information.) 
Oct 13 04:59:10.537229 waagent[2219]: 2025-10-13T04:59:10.537152Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 21E4CB0B-3EAA-4A97-828A-1144ACD2682F;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Oct 13 04:59:10.564699 waagent[2219]: 2025-10-13T04:59:10.564640Z INFO MonitorHandler ExtHandler Network interfaces: Oct 13 04:59:10.564699 waagent[2219]: Executing ['ip', '-a', '-o', 'link']: Oct 13 04:59:10.564699 waagent[2219]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 13 04:59:10.564699 waagent[2219]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:38:86 brd ff:ff:ff:ff:ff:ff\ altname enx0022487b3886 Oct 13 04:59:10.564699 waagent[2219]: 3: enP64445s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:38:86 brd ff:ff:ff:ff:ff:ff\ altname enP64445p0s2 Oct 13 04:59:10.564699 waagent[2219]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 13 04:59:10.564699 waagent[2219]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 13 04:59:10.564699 waagent[2219]: 2: eth0 inet 10.200.20.16/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 13 04:59:10.564699 waagent[2219]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 13 04:59:10.564699 waagent[2219]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Oct 13 04:59:10.564699 waagent[2219]: 2: eth0 inet6 fe80::222:48ff:fe7b:3886/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 13 04:59:10.674629 waagent[2219]: 2025-10-13T04:59:10.674071Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Oct 13 04:59:10.674629 waagent[2219]: Chain INPUT (policy ACCEPT 
0 packets, 0 bytes) Oct 13 04:59:10.674629 waagent[2219]: pkts bytes target prot opt in out source destination Oct 13 04:59:10.674629 waagent[2219]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 04:59:10.674629 waagent[2219]: pkts bytes target prot opt in out source destination Oct 13 04:59:10.674629 waagent[2219]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 04:59:10.674629 waagent[2219]: pkts bytes target prot opt in out source destination Oct 13 04:59:10.674629 waagent[2219]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 13 04:59:10.674629 waagent[2219]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 04:59:10.674629 waagent[2219]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 04:59:10.676440 waagent[2219]: 2025-10-13T04:59:10.676397Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 13 04:59:10.676440 waagent[2219]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 04:59:10.676440 waagent[2219]: pkts bytes target prot opt in out source destination Oct 13 04:59:10.676440 waagent[2219]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 04:59:10.676440 waagent[2219]: pkts bytes target prot opt in out source destination Oct 13 04:59:10.676440 waagent[2219]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 04:59:10.676440 waagent[2219]: pkts bytes target prot opt in out source destination Oct 13 04:59:10.676440 waagent[2219]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 13 04:59:10.676440 waagent[2219]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 04:59:10.676440 waagent[2219]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 04:59:10.676637 waagent[2219]: 2025-10-13T04:59:10.676614Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Oct 13 04:59:16.060208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Oct 13 04:59:16.061661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:16.166509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:16.169395 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:16.245834 kubelet[2371]: E1013 04:59:16.245779 2371 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:16.248775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:16.248886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:59:16.249391 systemd[1]: kubelet.service: Consumed 108ms CPU time, 105.5M memory peak. Oct 13 04:59:26.499284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 04:59:26.500709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:26.807075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 04:59:26.814852 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:26.839360 kubelet[2385]: E1013 04:59:26.839301 2385 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:26.841467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:26.841595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:59:26.842079 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.8M memory peak. Oct 13 04:59:28.245176 chronyd[1931]: Selected source PHC0 Oct 13 04:59:36.986407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 13 04:59:36.987921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:37.319351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:37.322036 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:37.346598 kubelet[2400]: E1013 04:59:37.346547 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:37.349596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:37.349818 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 13 04:59:37.350297 systemd[1]: kubelet.service: Consumed 104ms CPU time, 107M memory peak. Oct 13 04:59:39.750369 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 04:59:39.752734 systemd[1]: Started sshd@0-10.200.20.16:22-10.200.16.10:40380.service - OpenSSH per-connection server daemon (10.200.16.10:40380). Oct 13 04:59:40.365883 sshd[2407]: Accepted publickey for core from 10.200.16.10 port 40380 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 04:59:40.366948 sshd-session[2407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:40.370681 systemd-logind[1953]: New session 3 of user core. Oct 13 04:59:40.378624 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 04:59:40.752679 systemd[1]: Started sshd@1-10.200.20.16:22-10.200.16.10:34686.service - OpenSSH per-connection server daemon (10.200.16.10:34686). Oct 13 04:59:41.180425 sshd[2413]: Accepted publickey for core from 10.200.16.10 port 34686 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 04:59:41.181546 sshd-session[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:41.185189 systemd-logind[1953]: New session 4 of user core. Oct 13 04:59:41.196661 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 04:59:41.502518 sshd[2416]: Connection closed by 10.200.16.10 port 34686 Oct 13 04:59:41.503043 sshd-session[2413]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:41.506522 systemd-logind[1953]: Session 4 logged out. Waiting for processes to exit. Oct 13 04:59:41.506845 systemd[1]: sshd@1-10.200.20.16:22-10.200.16.10:34686.service: Deactivated successfully. Oct 13 04:59:41.508157 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 04:59:41.509541 systemd-logind[1953]: Removed session 4. 
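Note on the repeated failures above: kubelet exits with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet. On kubeadm-provisioned nodes that file is written by `kubeadm init`/`kubeadm join`, so the unit keeps failing until provisioning (here, the `install.sh` run at 04:59:46) has completed. A minimal sketch for checking that state, with the path parameterized so it is testable (the default path is the one named in the error above):

```python
from pathlib import Path

def kubelet_config_state(path: str = "/var/lib/kubelet/config.yaml") -> str:
    # kubeadm init/join writes this file; until then the kubelet
    # unit exits with status=1/FAILURE, exactly as logged above.
    return "present" if Path(path).is_file() else "missing"

print(kubelet_config_state())
```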
Oct 13 04:59:41.614151 systemd[1]: Started sshd@2-10.200.20.16:22-10.200.16.10:34688.service - OpenSSH per-connection server daemon (10.200.16.10:34688). Oct 13 04:59:42.088278 sshd[2422]: Accepted publickey for core from 10.200.16.10 port 34688 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 04:59:42.089399 sshd-session[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:42.092890 systemd-logind[1953]: New session 5 of user core. Oct 13 04:59:42.099771 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 04:59:42.419074 sshd[2425]: Connection closed by 10.200.16.10 port 34688 Oct 13 04:59:42.418408 sshd-session[2422]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:42.421565 systemd-logind[1953]: Session 5 logged out. Waiting for processes to exit. Oct 13 04:59:42.423004 systemd[1]: sshd@2-10.200.20.16:22-10.200.16.10:34688.service: Deactivated successfully. Oct 13 04:59:42.424723 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 04:59:42.426461 systemd-logind[1953]: Removed session 5. Oct 13 04:59:42.496943 systemd[1]: Started sshd@3-10.200.20.16:22-10.200.16.10:34692.service - OpenSSH per-connection server daemon (10.200.16.10:34692). Oct 13 04:59:42.922850 sshd[2431]: Accepted publickey for core from 10.200.16.10 port 34692 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 04:59:42.923964 sshd-session[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:42.927562 systemd-logind[1953]: New session 6 of user core. Oct 13 04:59:42.938620 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 04:59:43.250587 sshd[2434]: Connection closed by 10.200.16.10 port 34692 Oct 13 04:59:43.251140 sshd-session[2431]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:43.254319 systemd[1]: sshd@3-10.200.20.16:22-10.200.16.10:34692.service: Deactivated successfully. 
Oct 13 04:59:43.256011 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 04:59:43.256645 systemd-logind[1953]: Session 6 logged out. Waiting for processes to exit. Oct 13 04:59:43.257849 systemd-logind[1953]: Removed session 6. Oct 13 04:59:43.261494 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Oct 13 04:59:43.336891 systemd[1]: Started sshd@4-10.200.20.16:22-10.200.16.10:34700.service - OpenSSH per-connection server daemon (10.200.16.10:34700). Oct 13 04:59:43.801774 sshd[2440]: Accepted publickey for core from 10.200.16.10 port 34700 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 04:59:43.802870 sshd-session[2440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:43.806663 systemd-logind[1953]: New session 7 of user core. Oct 13 04:59:43.813698 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 04:59:44.272465 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 04:59:44.272735 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:44.295830 sudo[2444]: pam_unix(sudo:session): session closed for user root Oct 13 04:59:44.370015 sshd[2443]: Connection closed by 10.200.16.10 port 34700 Oct 13 04:59:44.370704 sshd-session[2440]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:44.374330 systemd[1]: sshd@4-10.200.20.16:22-10.200.16.10:34700.service: Deactivated successfully. Oct 13 04:59:44.375655 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 04:59:44.376207 systemd-logind[1953]: Session 7 logged out. Waiting for processes to exit. Oct 13 04:59:44.377082 systemd-logind[1953]: Removed session 7. Oct 13 04:59:44.447119 systemd[1]: Started sshd@5-10.200.20.16:22-10.200.16.10:34714.service - OpenSSH per-connection server daemon (10.200.16.10:34714). 
Oct 13 04:59:44.879493 sshd[2450]: Accepted publickey for core from 10.200.16.10 port 34714 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 04:59:44.881378 sshd-session[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:44.885337 systemd-logind[1953]: New session 8 of user core. Oct 13 04:59:44.893764 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 04:59:45.122848 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 04:59:45.123057 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:45.158144 sudo[2455]: pam_unix(sudo:session): session closed for user root Oct 13 04:59:45.162915 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 04:59:45.163115 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:45.171263 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 04:59:45.203989 augenrules[2477]: No rules Oct 13 04:59:45.205094 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 04:59:45.205407 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 04:59:45.206455 sudo[2454]: pam_unix(sudo:session): session closed for user root Oct 13 04:59:45.280511 sshd[2453]: Connection closed by 10.200.16.10 port 34714 Oct 13 04:59:45.280812 sshd-session[2450]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:45.283926 systemd-logind[1953]: Session 8 logged out. Waiting for processes to exit. Oct 13 04:59:45.284885 systemd[1]: sshd@5-10.200.20.16:22-10.200.16.10:34714.service: Deactivated successfully. Oct 13 04:59:45.286441 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 04:59:45.287088 systemd-logind[1953]: Removed session 8. 
Oct 13 04:59:45.361118 systemd[1]: Started sshd@6-10.200.20.16:22-10.200.16.10:34718.service - OpenSSH per-connection server daemon (10.200.16.10:34718). Oct 13 04:59:45.793484 sshd[2486]: Accepted publickey for core from 10.200.16.10 port 34718 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 04:59:45.794531 sshd-session[2486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:45.798203 systemd-logind[1953]: New session 9 of user core. Oct 13 04:59:45.805599 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 04:59:46.038023 sudo[2490]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 04:59:46.038575 sudo[2490]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:47.485886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 13 04:59:47.487263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:47.635933 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 04:59:47.643750 (dockerd)[2512]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 04:59:47.801292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 04:59:47.803860 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:47.830340 kubelet[2518]: E1013 04:59:47.830291 2518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:47.832509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:47.832721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:59:47.833334 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.9M memory peak. Oct 13 04:59:48.906607 dockerd[2512]: time="2025-10-13T04:59:48.906330883Z" level=info msg="Starting up" Oct 13 04:59:48.907648 dockerd[2512]: time="2025-10-13T04:59:48.907626160Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 04:59:48.915918 dockerd[2512]: time="2025-10-13T04:59:48.915883439Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 04:59:49.047769 dockerd[2512]: time="2025-10-13T04:59:49.047725387Z" level=info msg="Loading containers: start." Oct 13 04:59:49.116516 kernel: Initializing XFRM netlink socket Oct 13 04:59:49.511707 systemd-networkd[1564]: docker0: Link UP Oct 13 04:59:49.523540 dockerd[2512]: time="2025-10-13T04:59:49.523501112Z" level=info msg="Loading containers: done." 
Oct 13 04:59:49.541381 dockerd[2512]: time="2025-10-13T04:59:49.541341980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 04:59:49.541542 dockerd[2512]: time="2025-10-13T04:59:49.541414823Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 04:59:49.541542 dockerd[2512]: time="2025-10-13T04:59:49.541515938Z" level=info msg="Initializing buildkit" Oct 13 04:59:49.586234 dockerd[2512]: time="2025-10-13T04:59:49.586193475Z" level=info msg="Completed buildkit initialization" Oct 13 04:59:49.591429 dockerd[2512]: time="2025-10-13T04:59:49.591385876Z" level=info msg="Daemon has completed initialization" Oct 13 04:59:49.591832 dockerd[2512]: time="2025-10-13T04:59:49.591676699Z" level=info msg="API listen on /run/docker.sock" Oct 13 04:59:49.592186 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 04:59:50.297666 containerd[1982]: time="2025-10-13T04:59:50.297378598Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 13 04:59:50.352804 update_engine[1957]: I20251013 04:59:50.352742 1957 update_attempter.cc:509] Updating boot flags... Oct 13 04:59:51.024277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608736363.mount: Deactivated successfully. 
Oct 13 04:59:52.162520 containerd[1982]: time="2025-10-13T04:59:52.162224160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:52.167000 containerd[1982]: time="2025-10-13T04:59:52.166964165Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Oct 13 04:59:52.171289 containerd[1982]: time="2025-10-13T04:59:52.171245549Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:52.176919 containerd[1982]: time="2025-10-13T04:59:52.176863425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:52.177645 containerd[1982]: time="2025-10-13T04:59:52.177436984Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.880016865s" Oct 13 04:59:52.177645 containerd[1982]: time="2025-10-13T04:59:52.177467481Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Oct 13 04:59:52.178828 containerd[1982]: time="2025-10-13T04:59:52.178802012Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 13 04:59:53.832727 containerd[1982]: time="2025-10-13T04:59:53.832674806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:53.835564 containerd[1982]: time="2025-10-13T04:59:53.835540335Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Oct 13 04:59:53.838162 containerd[1982]: time="2025-10-13T04:59:53.838140831Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:53.843551 containerd[1982]: time="2025-10-13T04:59:53.843498813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:53.844199 containerd[1982]: time="2025-10-13T04:59:53.844047589Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.665217345s" Oct 13 04:59:53.844199 containerd[1982]: time="2025-10-13T04:59:53.844079734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Oct 13 04:59:53.844539 containerd[1982]: time="2025-10-13T04:59:53.844510322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 13 04:59:55.015505 containerd[1982]: time="2025-10-13T04:59:55.015075467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:55.017460 containerd[1982]: time="2025-10-13T04:59:55.017435085Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Oct 13 04:59:55.020149 containerd[1982]: time="2025-10-13T04:59:55.020119625Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:55.025124 containerd[1982]: time="2025-10-13T04:59:55.025083820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:55.025655 containerd[1982]: time="2025-10-13T04:59:55.025626115Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.181091393s" Oct 13 04:59:55.025749 containerd[1982]: time="2025-10-13T04:59:55.025736262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Oct 13 04:59:55.026380 containerd[1982]: time="2025-10-13T04:59:55.026357823Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 13 04:59:55.970688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825486411.mount: Deactivated successfully. 
Oct 13 04:59:56.260114 containerd[1982]: time="2025-10-13T04:59:56.259656175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:56.263350 containerd[1982]: time="2025-10-13T04:59:56.263292797Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Oct 13 04:59:56.266627 containerd[1982]: time="2025-10-13T04:59:56.266601586Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:56.271054 containerd[1982]: time="2025-10-13T04:59:56.271031198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:56.271483 containerd[1982]: time="2025-10-13T04:59:56.271274533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.244813219s" Oct 13 04:59:56.271524 containerd[1982]: time="2025-10-13T04:59:56.271501571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Oct 13 04:59:56.272132 containerd[1982]: time="2025-10-13T04:59:56.272107452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 13 04:59:57.328417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124777925.mount: Deactivated successfully. 
Oct 13 04:59:57.985920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 13 04:59:57.988536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:58.090393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:58.093749 (kubelet)[2928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:58.193188 kubelet[2928]: E1013 04:59:58.193114 2928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:58.195529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:58.196017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:59:58.196725 systemd[1]: kubelet.service: Consumed 108ms CPU time, 106.9M memory peak. 
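The restart counter (2 through 5 above) advances on a steady ~10-second cadence, which would be consistent with a `Restart=`/`RestartSec=10` setting in the kubelet unit file; that is an assumption, since the unit file itself is not part of this journal. Extracting the cadence from the logged start timestamps:

```python
from datetime import datetime

# kubelet start timestamps taken from the journal entries above.
starts = ["04:59:16", "04:59:26", "04:59:37", "04:59:47", "04:59:57"]
times = [datetime.strptime(t, "%H:%M:%S") for t in starts]
gaps = [(later - earlier).total_seconds() for earlier, later in zip(times, times[1:])]
print(gaps)  # -> [10.0, 11.0, 10.0, 10.0]
```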
Oct 13 04:59:58.502417 containerd[1982]: time="2025-10-13T04:59:58.502363660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:58.504891 containerd[1982]: time="2025-10-13T04:59:58.504708053Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Oct 13 04:59:58.508172 containerd[1982]: time="2025-10-13T04:59:58.508146862Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:58.512130 containerd[1982]: time="2025-10-13T04:59:58.512098948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:58.512925 containerd[1982]: time="2025-10-13T04:59:58.512781583Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.240645114s" Oct 13 04:59:58.512925 containerd[1982]: time="2025-10-13T04:59:58.512807216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Oct 13 04:59:58.513609 containerd[1982]: time="2025-10-13T04:59:58.513589006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 04:59:59.044316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746404915.mount: Deactivated successfully. 
Oct 13 04:59:59.067521 containerd[1982]: time="2025-10-13T04:59:59.067323327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:59:59.071154 containerd[1982]: time="2025-10-13T04:59:59.071123705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Oct 13 04:59:59.075202 containerd[1982]: time="2025-10-13T04:59:59.074509128Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:59:59.078059 containerd[1982]: time="2025-10-13T04:59:59.078033483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:59:59.078889 containerd[1982]: time="2025-10-13T04:59:59.078867410Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 565.176529ms" Oct 13 04:59:59.078982 containerd[1982]: time="2025-10-13T04:59:59.078969141Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 13 04:59:59.079544 containerd[1982]: time="2025-10-13T04:59:59.079413330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 13 04:59:59.629677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342938191.mount: Deactivated 
successfully. Oct 13 05:00:02.064293 containerd[1982]: time="2025-10-13T05:00:02.064220995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:02.067955 containerd[1982]: time="2025-10-13T05:00:02.067924631Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Oct 13 05:00:02.072071 containerd[1982]: time="2025-10-13T05:00:02.072042441Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:02.095096 containerd[1982]: time="2025-10-13T05:00:02.094569886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:02.095096 containerd[1982]: time="2025-10-13T05:00:02.094980675Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.015275553s" Oct 13 05:00:02.095096 containerd[1982]: time="2025-10-13T05:00:02.095011004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Oct 13 05:00:04.718908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:00:04.719025 systemd[1]: kubelet.service: Consumed 108ms CPU time, 106.9M memory peak. Oct 13 05:00:04.721434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
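Aside on the etcd pull above, the slowest of the image pulls in this log: the record reports size "70026017" fetched in 3.015275553 s. Whether containerd's size field counts compressed or unpacked bytes is a reporting detail, so the implied transfer rate is only a ballpark figure:

```python
# Values copied from the "Pulled image registry.k8s.io/etcd:3.5.21-0" record above.
size_bytes = 70_026_017
elapsed_s = 3.015275553
mb_per_s = size_bytes / elapsed_s / 1e6
print(f"{mb_per_s:.1f} MB/s")  # prints 23.2 MB/s
```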
Oct 13 05:00:04.743246 systemd[1]: Reload requested from client PID 3021 ('systemctl') (unit session-9.scope)... Oct 13 05:00:04.743259 systemd[1]: Reloading... Oct 13 05:00:04.832510 zram_generator::config[3072]: No configuration found. Oct 13 05:00:04.987628 systemd[1]: Reloading finished in 244 ms. Oct 13 05:00:05.025669 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:00:05.025727 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:00:05.026153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:00:05.026193 systemd[1]: kubelet.service: Consumed 73ms CPU time, 95.1M memory peak. Oct 13 05:00:05.028565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:00:05.314398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:00:05.320671 (kubelet)[3137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:00:05.347085 kubelet[3137]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:00:05.347085 kubelet[3137]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:00:05.347085 kubelet[3137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
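The three deprecation warnings above all point the same way: `--container-runtime-endpoint`, `--pod-infra-container-image`, and `--volume-plugin-dir` should move out of the unit's command line and into the file referenced by `--config`. A hedged sketch of the corresponding KubeletConfiguration fragment, assuming the standard containerd socket path (the socket path is not shown in this log; `volumePluginDir` is the Flexvolume directory the kubelet recreates at 05:00:05 below, and `--pod-infra-container-image` has no config-file equivalent since the CRI now supplies the sandbox image):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```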
Oct 13 05:00:05.347420 kubelet[3137]: I1013 05:00:05.347119 3137 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:00:05.690608 kubelet[3137]: I1013 05:00:05.690099 3137 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:00:05.690608 kubelet[3137]: I1013 05:00:05.690130 3137 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:00:05.690608 kubelet[3137]: I1013 05:00:05.690409 3137 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:00:05.705818 kubelet[3137]: E1013 05:00:05.705768 3137 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:00:05.706240 kubelet[3137]: I1013 05:00:05.706218 3137 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:00:05.712425 kubelet[3137]: I1013 05:00:05.712400 3137 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:00:05.714900 kubelet[3137]: I1013 05:00:05.714879 3137 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:00:05.715947 kubelet[3137]: I1013 05:00:05.715908 3137 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:00:05.716071 kubelet[3137]: I1013 05:00:05.715950 3137 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-a-bf8a300537","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:00:05.716146 kubelet[3137]: I1013 05:00:05.716082 3137 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 
05:00:05.716146 kubelet[3137]: I1013 05:00:05.716091 3137 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:00:05.716779 kubelet[3137]: I1013 05:00:05.716760 3137 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:00:05.719150 kubelet[3137]: I1013 05:00:05.719130 3137 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:00:05.719178 kubelet[3137]: I1013 05:00:05.719159 3137 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:00:05.719250 kubelet[3137]: I1013 05:00:05.719238 3137 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:00:05.720389 kubelet[3137]: I1013 05:00:05.720376 3137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:00:05.727544 kubelet[3137]: E1013 05:00:05.726553 3137 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-a-bf8a300537&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:00:05.727544 kubelet[3137]: I1013 05:00:05.726661 3137 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:00:05.727544 kubelet[3137]: I1013 05:00:05.727073 3137 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:00:05.727544 kubelet[3137]: W1013 05:00:05.727121 3137 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 13 05:00:05.729196 kubelet[3137]: I1013 05:00:05.729178 3137 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:00:05.729365 kubelet[3137]: I1013 05:00:05.729352 3137 server.go:1289] "Started kubelet" Oct 13 05:00:05.730680 kubelet[3137]: E1013 05:00:05.730649 3137 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:00:05.732864 kubelet[3137]: I1013 05:00:05.732696 3137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:00:05.734534 kubelet[3137]: E1013 05:00:05.733515 3137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-a-bf8a300537.186df444ce634817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-a-bf8a300537,UID:ci-4487.0.0-a-bf8a300537,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-a-bf8a300537,},FirstTimestamp:2025-10-13 05:00:05.729298455 +0000 UTC m=+0.405399500,LastTimestamp:2025-10-13 05:00:05.729298455 +0000 UTC m=+0.405399500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-a-bf8a300537,}" Oct 13 05:00:05.735070 kubelet[3137]: I1013 05:00:05.735013 3137 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:00:05.735381 kubelet[3137]: I1013 05:00:05.735362 3137 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:00:05.735627 kubelet[3137]: 
E1013 05:00:05.735599 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:05.737215 kubelet[3137]: I1013 05:00:05.737175 3137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:00:05.739227 kubelet[3137]: I1013 05:00:05.738793 3137 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:00:05.740872 kubelet[3137]: I1013 05:00:05.740811 3137 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:00:05.741053 kubelet[3137]: I1013 05:00:05.741032 3137 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:00:05.742772 kubelet[3137]: E1013 05:00:05.742740 3137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-bf8a300537?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="200ms" Oct 13 05:00:05.742913 kubelet[3137]: I1013 05:00:05.742896 3137 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:00:05.743084 kubelet[3137]: I1013 05:00:05.743063 3137 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:00:05.743170 kubelet[3137]: I1013 05:00:05.743154 3137 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:00:05.744967 kubelet[3137]: E1013 05:00:05.744932 3137 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:00:05.745321 kubelet[3137]: I1013 05:00:05.745279 3137 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:00:05.745533 kubelet[3137]: E1013 05:00:05.745507 3137 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:00:05.745975 kubelet[3137]: I1013 05:00:05.745898 3137 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:00:05.767839 kubelet[3137]: I1013 05:00:05.767813 3137 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:00:05.767839 kubelet[3137]: I1013 05:00:05.767829 3137 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:00:05.767839 kubelet[3137]: I1013 05:00:05.767847 3137 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:00:05.784875 kubelet[3137]: I1013 05:00:05.784847 3137 policy_none.go:49] "None policy: Start" Oct 13 05:00:05.784875 kubelet[3137]: I1013 05:00:05.784879 3137 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:00:05.784875 kubelet[3137]: I1013 05:00:05.784890 3137 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:00:05.794185 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:00:05.804150 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:00:05.807268 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 13 05:00:05.817375 kubelet[3137]: E1013 05:00:05.817351 3137 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:00:05.817668 kubelet[3137]: I1013 05:00:05.817624 3137 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:00:05.817668 kubelet[3137]: I1013 05:00:05.817641 3137 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:00:05.818041 kubelet[3137]: I1013 05:00:05.817928 3137 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:00:05.819345 kubelet[3137]: E1013 05:00:05.819018 3137 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:00:05.819403 kubelet[3137]: E1013 05:00:05.819376 3137 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:05.822853 kubelet[3137]: I1013 05:00:05.822819 3137 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:00:05.823939 kubelet[3137]: I1013 05:00:05.823784 3137 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:00:05.823939 kubelet[3137]: I1013 05:00:05.823803 3137 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:00:05.823939 kubelet[3137]: I1013 05:00:05.823819 3137 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 05:00:05.823939 kubelet[3137]: I1013 05:00:05.823824 3137 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:00:05.824033 kubelet[3137]: E1013 05:00:05.823902 3137 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 13 05:00:05.825911 kubelet[3137]: E1013 05:00:05.825890 3137 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:00:05.919227 kubelet[3137]: I1013 05:00:05.919194 3137 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.919591 kubelet[3137]: E1013 05:00:05.919568 3137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.937536 systemd[1]: Created slice kubepods-burstable-pode8ef0990b9e4a58c235b178375fd18ca.slice - libcontainer container kubepods-burstable-pode8ef0990b9e4a58c235b178375fd18ca.slice. 
Oct 13 05:00:05.944072 kubelet[3137]: E1013 05:00:05.943976 3137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-bf8a300537?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="400ms" Oct 13 05:00:05.945155 kubelet[3137]: E1013 05:00:05.945062 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946363 kubelet[3137]: I1013 05:00:05.945734 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946363 kubelet[3137]: I1013 05:00:05.945750 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946363 kubelet[3137]: I1013 05:00:05.945769 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946363 kubelet[3137]: I1013 05:00:05.945783 3137 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946363 kubelet[3137]: I1013 05:00:05.945795 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8ef0990b9e4a58c235b178375fd18ca-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" (UID: \"e8ef0990b9e4a58c235b178375fd18ca\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946457 kubelet[3137]: I1013 05:00:05.945804 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8ef0990b9e4a58c235b178375fd18ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" (UID: \"e8ef0990b9e4a58c235b178375fd18ca\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946457 kubelet[3137]: I1013 05:00:05.945813 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946457 kubelet[3137]: I1013 05:00:05.945822 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3463929a8da9d1f032115937306fb81e-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-a-bf8a300537\" (UID: 
\"3463929a8da9d1f032115937306fb81e\") " pod="kube-system/kube-scheduler-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.946457 kubelet[3137]: I1013 05:00:05.945831 3137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8ef0990b9e4a58c235b178375fd18ca-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" (UID: \"e8ef0990b9e4a58c235b178375fd18ca\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.949627 systemd[1]: Created slice kubepods-burstable-podc53a119800c16ef65ea54bd5c3e17348.slice - libcontainer container kubepods-burstable-podc53a119800c16ef65ea54bd5c3e17348.slice. Oct 13 05:00:05.959626 kubelet[3137]: E1013 05:00:05.959497 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:05.961393 systemd[1]: Created slice kubepods-burstable-pod3463929a8da9d1f032115937306fb81e.slice - libcontainer container kubepods-burstable-pod3463929a8da9d1f032115937306fb81e.slice. 
Oct 13 05:00:05.963455 kubelet[3137]: E1013 05:00:05.963427 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:06.122024 kubelet[3137]: I1013 05:00:06.121995 3137 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:06.123129 kubelet[3137]: E1013 05:00:06.123100 3137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:06.248068 containerd[1982]: time="2025-10-13T05:00:06.248003036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-a-bf8a300537,Uid:e8ef0990b9e4a58c235b178375fd18ca,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:06.260608 containerd[1982]: time="2025-10-13T05:00:06.260575711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-a-bf8a300537,Uid:c53a119800c16ef65ea54bd5c3e17348,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:06.264954 containerd[1982]: time="2025-10-13T05:00:06.264863918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-a-bf8a300537,Uid:3463929a8da9d1f032115937306fb81e,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:06.345595 kubelet[3137]: E1013 05:00:06.345557 3137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-a-bf8a300537?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="800ms" Oct 13 05:00:06.355790 containerd[1982]: time="2025-10-13T05:00:06.355750171Z" level=info msg="connecting to shim 46a88abe8a964ddcfc1ccf28102bb88edd151e7d721b0d5280d4f9ee3a3234e3" 
address="unix:///run/containerd/s/26b1015b7cb1019d986b68afb7488ff80b46ccf7cb58a8c41efb487b3f34b7e9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:06.362360 containerd[1982]: time="2025-10-13T05:00:06.361927037Z" level=info msg="connecting to shim 5c7031853b79e5bda7119ed587621bc56167e1512e8fb54996b3af9bced80b06" address="unix:///run/containerd/s/ded07d17f33a842edb2ec3b420d1570344029bd2ded4d5176c258ab3cf611619" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:06.364335 containerd[1982]: time="2025-10-13T05:00:06.364225805Z" level=info msg="connecting to shim 0ff8b91eee1d7c9dda5f4b61cc38c7bfc01d7c919e36abc11dcc2f322e5a08f3" address="unix:///run/containerd/s/d5b7a09fd6fe59d1c93c0257b4e312706c6b69c0e42c78f44857e588eda41674" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:06.394623 systemd[1]: Started cri-containerd-0ff8b91eee1d7c9dda5f4b61cc38c7bfc01d7c919e36abc11dcc2f322e5a08f3.scope - libcontainer container 0ff8b91eee1d7c9dda5f4b61cc38c7bfc01d7c919e36abc11dcc2f322e5a08f3. Oct 13 05:00:06.398525 systemd[1]: Started cri-containerd-46a88abe8a964ddcfc1ccf28102bb88edd151e7d721b0d5280d4f9ee3a3234e3.scope - libcontainer container 46a88abe8a964ddcfc1ccf28102bb88edd151e7d721b0d5280d4f9ee3a3234e3. Oct 13 05:00:06.400523 systemd[1]: Started cri-containerd-5c7031853b79e5bda7119ed587621bc56167e1512e8fb54996b3af9bced80b06.scope - libcontainer container 5c7031853b79e5bda7119ed587621bc56167e1512e8fb54996b3af9bced80b06. 
Oct 13 05:00:06.445768 containerd[1982]: time="2025-10-13T05:00:06.445726875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-a-bf8a300537,Uid:c53a119800c16ef65ea54bd5c3e17348,Namespace:kube-system,Attempt:0,} returns sandbox id \"46a88abe8a964ddcfc1ccf28102bb88edd151e7d721b0d5280d4f9ee3a3234e3\"" Oct 13 05:00:06.459046 containerd[1982]: time="2025-10-13T05:00:06.458927971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-a-bf8a300537,Uid:e8ef0990b9e4a58c235b178375fd18ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c7031853b79e5bda7119ed587621bc56167e1512e8fb54996b3af9bced80b06\"" Oct 13 05:00:06.459336 containerd[1982]: time="2025-10-13T05:00:06.459310775Z" level=info msg="CreateContainer within sandbox \"46a88abe8a964ddcfc1ccf28102bb88edd151e7d721b0d5280d4f9ee3a3234e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:00:06.467066 containerd[1982]: time="2025-10-13T05:00:06.467043698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-a-bf8a300537,Uid:3463929a8da9d1f032115937306fb81e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ff8b91eee1d7c9dda5f4b61cc38c7bfc01d7c919e36abc11dcc2f322e5a08f3\"" Oct 13 05:00:06.469081 containerd[1982]: time="2025-10-13T05:00:06.469050233Z" level=info msg="CreateContainer within sandbox \"5c7031853b79e5bda7119ed587621bc56167e1512e8fb54996b3af9bced80b06\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:00:06.475213 containerd[1982]: time="2025-10-13T05:00:06.474696427Z" level=info msg="CreateContainer within sandbox \"0ff8b91eee1d7c9dda5f4b61cc38c7bfc01d7c919e36abc11dcc2f322e5a08f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:00:06.492247 containerd[1982]: time="2025-10-13T05:00:06.492207266Z" level=info msg="Container f1b88a6be3c707b268ce816da78d9e1c6ba7a5c51dc08c90f88ecfb289bcf499: CDI devices 
from CRI Config.CDIDevices: []" Oct 13 05:00:06.513036 containerd[1982]: time="2025-10-13T05:00:06.512934487Z" level=info msg="Container 239df2bbf3d82c55c2c557bea9e9e677c30f5ac24f9b4ec435f35354dc96e894: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:06.519921 containerd[1982]: time="2025-10-13T05:00:06.519492541Z" level=info msg="Container 9d69c47e9b3e97bf7ee5db3638571ccc8620d1d6e7899d3fe0e358ebb3db99b9: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:06.526785 kubelet[3137]: I1013 05:00:06.526692 3137 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:06.527390 kubelet[3137]: E1013 05:00:06.527224 3137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:06.529084 containerd[1982]: time="2025-10-13T05:00:06.529039514Z" level=info msg="CreateContainer within sandbox \"46a88abe8a964ddcfc1ccf28102bb88edd151e7d721b0d5280d4f9ee3a3234e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f1b88a6be3c707b268ce816da78d9e1c6ba7a5c51dc08c90f88ecfb289bcf499\"" Oct 13 05:00:06.529661 containerd[1982]: time="2025-10-13T05:00:06.529634037Z" level=info msg="StartContainer for \"f1b88a6be3c707b268ce816da78d9e1c6ba7a5c51dc08c90f88ecfb289bcf499\"" Oct 13 05:00:06.531501 containerd[1982]: time="2025-10-13T05:00:06.531161949Z" level=info msg="connecting to shim f1b88a6be3c707b268ce816da78d9e1c6ba7a5c51dc08c90f88ecfb289bcf499" address="unix:///run/containerd/s/26b1015b7cb1019d986b68afb7488ff80b46ccf7cb58a8c41efb487b3f34b7e9" protocol=ttrpc version=3 Oct 13 05:00:06.539380 containerd[1982]: time="2025-10-13T05:00:06.539335374Z" level=info msg="CreateContainer within sandbox \"0ff8b91eee1d7c9dda5f4b61cc38c7bfc01d7c919e36abc11dcc2f322e5a08f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"9d69c47e9b3e97bf7ee5db3638571ccc8620d1d6e7899d3fe0e358ebb3db99b9\"" Oct 13 05:00:06.539935 containerd[1982]: time="2025-10-13T05:00:06.539911640Z" level=info msg="StartContainer for \"9d69c47e9b3e97bf7ee5db3638571ccc8620d1d6e7899d3fe0e358ebb3db99b9\"" Oct 13 05:00:06.540957 containerd[1982]: time="2025-10-13T05:00:06.540922624Z" level=info msg="connecting to shim 9d69c47e9b3e97bf7ee5db3638571ccc8620d1d6e7899d3fe0e358ebb3db99b9" address="unix:///run/containerd/s/d5b7a09fd6fe59d1c93c0257b4e312706c6b69c0e42c78f44857e588eda41674" protocol=ttrpc version=3 Oct 13 05:00:06.544949 kubelet[3137]: E1013 05:00:06.544187 3137 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-a-bf8a300537&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:00:06.549630 systemd[1]: Started cri-containerd-f1b88a6be3c707b268ce816da78d9e1c6ba7a5c51dc08c90f88ecfb289bcf499.scope - libcontainer container f1b88a6be3c707b268ce816da78d9e1c6ba7a5c51dc08c90f88ecfb289bcf499. 
Oct 13 05:00:06.553600 containerd[1982]: time="2025-10-13T05:00:06.553563926Z" level=info msg="CreateContainer within sandbox \"5c7031853b79e5bda7119ed587621bc56167e1512e8fb54996b3af9bced80b06\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"239df2bbf3d82c55c2c557bea9e9e677c30f5ac24f9b4ec435f35354dc96e894\"" Oct 13 05:00:06.554791 containerd[1982]: time="2025-10-13T05:00:06.554764860Z" level=info msg="StartContainer for \"239df2bbf3d82c55c2c557bea9e9e677c30f5ac24f9b4ec435f35354dc96e894\"" Oct 13 05:00:06.559598 containerd[1982]: time="2025-10-13T05:00:06.559569763Z" level=info msg="connecting to shim 239df2bbf3d82c55c2c557bea9e9e677c30f5ac24f9b4ec435f35354dc96e894" address="unix:///run/containerd/s/ded07d17f33a842edb2ec3b420d1570344029bd2ded4d5176c258ab3cf611619" protocol=ttrpc version=3 Oct 13 05:00:06.572695 systemd[1]: Started cri-containerd-9d69c47e9b3e97bf7ee5db3638571ccc8620d1d6e7899d3fe0e358ebb3db99b9.scope - libcontainer container 9d69c47e9b3e97bf7ee5db3638571ccc8620d1d6e7899d3fe0e358ebb3db99b9. Oct 13 05:00:06.576198 systemd[1]: Started cri-containerd-239df2bbf3d82c55c2c557bea9e9e677c30f5ac24f9b4ec435f35354dc96e894.scope - libcontainer container 239df2bbf3d82c55c2c557bea9e9e677c30f5ac24f9b4ec435f35354dc96e894. 
Oct 13 05:00:06.664059 containerd[1982]: time="2025-10-13T05:00:06.663925681Z" level=info msg="StartContainer for \"f1b88a6be3c707b268ce816da78d9e1c6ba7a5c51dc08c90f88ecfb289bcf499\" returns successfully" Oct 13 05:00:06.668181 containerd[1982]: time="2025-10-13T05:00:06.668102324Z" level=info msg="StartContainer for \"239df2bbf3d82c55c2c557bea9e9e677c30f5ac24f9b4ec435f35354dc96e894\" returns successfully" Oct 13 05:00:06.670683 containerd[1982]: time="2025-10-13T05:00:06.670664293Z" level=info msg="StartContainer for \"9d69c47e9b3e97bf7ee5db3638571ccc8620d1d6e7899d3fe0e358ebb3db99b9\" returns successfully" Oct 13 05:00:06.832526 kubelet[3137]: E1013 05:00:06.832270 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:06.835488 kubelet[3137]: E1013 05:00:06.835373 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:06.837891 kubelet[3137]: E1013 05:00:06.837790 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:07.329228 kubelet[3137]: I1013 05:00:07.329198 3137 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:07.513324 kubelet[3137]: E1013 05:00:07.513270 3137 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:07.530450 kubelet[3137]: I1013 05:00:07.530413 3137 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:07.530450 kubelet[3137]: E1013 05:00:07.530447 3137 kubelet_node_status.go:548] "Error 
updating node status, will retry" err="error getting node \"ci-4487.0.0-a-bf8a300537\": node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:07.550531 kubelet[3137]: E1013 05:00:07.550362 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:07.651930 kubelet[3137]: E1013 05:00:07.651820 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:07.752917 kubelet[3137]: E1013 05:00:07.752873 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:07.844721 kubelet[3137]: E1013 05:00:07.844234 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:07.844721 kubelet[3137]: E1013 05:00:07.844540 3137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-a-bf8a300537\" not found" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:07.852945 kubelet[3137]: E1013 05:00:07.852923 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:07.953910 kubelet[3137]: E1013 05:00:07.953789 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:08.054350 kubelet[3137]: E1013 05:00:08.054306 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:08.154922 kubelet[3137]: E1013 05:00:08.154880 3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:08.255743 kubelet[3137]: E1013 05:00:08.255705 
3137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.0-a-bf8a300537\" not found" Oct 13 05:00:08.336512 kubelet[3137]: I1013 05:00:08.336168 3137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:08.340640 kubelet[3137]: E1013 05:00:08.340501 3137 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:08.340640 kubelet[3137]: I1013 05:00:08.340526 3137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:08.343037 kubelet[3137]: E1013 05:00:08.341961 3137 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-a-bf8a300537\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:08.343037 kubelet[3137]: I1013 05:00:08.341983 3137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:08.343136 kubelet[3137]: E1013 05:00:08.343090 3137 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:08.730515 kubelet[3137]: I1013 05:00:08.729556 3137 apiserver.go:52] "Watching apiserver" Oct 13 05:00:08.743952 kubelet[3137]: I1013 05:00:08.743909 3137 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:00:08.840228 kubelet[3137]: I1013 05:00:08.840000 3137 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:08.847345 kubelet[3137]: I1013 05:00:08.847320 3137 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:00:09.826231 systemd[1]: Reload requested from client PID 3418 ('systemctl') (unit session-9.scope)... Oct 13 05:00:09.826256 systemd[1]: Reloading... Oct 13 05:00:09.927531 zram_generator::config[3466]: No configuration found. Oct 13 05:00:10.104009 systemd[1]: Reloading finished in 277 ms. Oct 13 05:00:10.124838 kubelet[3137]: I1013 05:00:10.124786 3137 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:00:10.125573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:00:10.139877 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:00:10.140255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:00:10.140562 systemd[1]: kubelet.service: Consumed 654ms CPU time, 125.1M memory peak. Oct 13 05:00:10.143069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:00:10.264659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:00:10.270764 (kubelet)[3530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:00:10.302378 kubelet[3530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:00:10.302378 kubelet[3530]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Oct 13 05:00:10.302378 kubelet[3530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:00:10.302728 kubelet[3530]: I1013 05:00:10.302452 3530 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:00:10.308505 kubelet[3530]: I1013 05:00:10.308061 3530 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:00:10.308505 kubelet[3530]: I1013 05:00:10.308089 3530 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:00:10.308505 kubelet[3530]: I1013 05:00:10.308241 3530 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:00:10.309302 kubelet[3530]: I1013 05:00:10.309281 3530 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:00:10.311146 kubelet[3530]: I1013 05:00:10.311113 3530 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:00:10.314719 kubelet[3530]: I1013 05:00:10.314706 3530 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:00:10.317251 kubelet[3530]: I1013 05:00:10.317233 3530 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:00:10.317572 kubelet[3530]: I1013 05:00:10.317546 3530 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:00:10.317755 kubelet[3530]: I1013 05:00:10.317641 3530 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-a-bf8a300537","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:00:10.317870 kubelet[3530]: I1013 05:00:10.317857 3530 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 
05:00:10.317999 kubelet[3530]: I1013 05:00:10.317921 3530 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:00:10.317999 kubelet[3530]: I1013 05:00:10.317968 3530 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:00:10.318174 kubelet[3530]: I1013 05:00:10.318162 3530 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:00:10.318236 kubelet[3530]: I1013 05:00:10.318228 3530 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:00:10.318299 kubelet[3530]: I1013 05:00:10.318291 3530 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:00:10.318357 kubelet[3530]: I1013 05:00:10.318349 3530 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:00:10.326961 kubelet[3530]: I1013 05:00:10.326673 3530 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:00:10.327103 kubelet[3530]: I1013 05:00:10.327084 3530 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:00:10.331353 kubelet[3530]: I1013 05:00:10.331319 3530 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:00:10.331422 kubelet[3530]: I1013 05:00:10.331368 3530 server.go:1289] "Started kubelet" Oct 13 05:00:10.332884 kubelet[3530]: I1013 05:00:10.332369 3530 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:00:10.334360 kubelet[3530]: I1013 05:00:10.334064 3530 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:00:10.334740 kubelet[3530]: I1013 05:00:10.334718 3530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:00:10.335497 kubelet[3530]: I1013 05:00:10.335163 3530 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 
05:00:10.336963 kubelet[3530]: I1013 05:00:10.336296 3530 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:00:10.344900 kubelet[3530]: I1013 05:00:10.344877 3530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:00:10.346351 kubelet[3530]: I1013 05:00:10.346330 3530 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:00:10.347135 kubelet[3530]: I1013 05:00:10.347119 3530 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:00:10.347337 kubelet[3530]: I1013 05:00:10.347319 3530 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:00:10.348706 kubelet[3530]: E1013 05:00:10.348688 3530 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:00:10.350424 kubelet[3530]: I1013 05:00:10.350408 3530 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:00:10.350601 kubelet[3530]: I1013 05:00:10.350592 3530 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:00:10.350782 kubelet[3530]: I1013 05:00:10.350765 3530 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:00:10.355731 kubelet[3530]: I1013 05:00:10.355129 3530 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:00:10.356994 kubelet[3530]: I1013 05:00:10.356962 3530 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:00:10.356994 kubelet[3530]: I1013 05:00:10.356986 3530 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:00:10.357081 kubelet[3530]: I1013 05:00:10.357003 3530 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:00:10.357081 kubelet[3530]: I1013 05:00:10.357008 3530 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:00:10.357081 kubelet[3530]: E1013 05:00:10.357042 3530 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:00:10.398125 kubelet[3530]: I1013 05:00:10.398074 3530 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398302 3530 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398333 3530 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398439 3530 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398447 3530 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398460 3530 policy_none.go:49] "None policy: Start" Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398468 3530 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398500 3530 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:00:10.399002 kubelet[3530]: I1013 05:00:10.398572 3530 state_mem.go:75] "Updated machine memory state" Oct 13 05:00:10.402222 kubelet[3530]: E1013 05:00:10.402199 3530 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:00:10.402957 kubelet[3530]: I1013 
05:00:10.402654 3530 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:00:10.402957 kubelet[3530]: I1013 05:00:10.402671 3530 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:00:10.403755 kubelet[3530]: I1013 05:00:10.403161 3530 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:00:10.403755 kubelet[3530]: E1013 05:00:10.403325 3530 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:00:10.458009 kubelet[3530]: I1013 05:00:10.457971 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.458347 kubelet[3530]: I1013 05:00:10.457991 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.458823 kubelet[3530]: I1013 05:00:10.458038 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.466665 kubelet[3530]: I1013 05:00:10.466638 3530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:00:10.470138 kubelet[3530]: I1013 05:00:10.469996 3530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:00:10.470138 kubelet[3530]: I1013 05:00:10.470037 3530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:00:10.470138 kubelet[3530]: E1013 05:00:10.470042 3530 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4487.0.0-a-bf8a300537\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.507857 kubelet[3530]: I1013 05:00:10.507823 3530 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.520113 kubelet[3530]: I1013 05:00:10.520074 3530 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.520212 kubelet[3530]: I1013 05:00:10.520168 3530 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549000 kubelet[3530]: I1013 05:00:10.548958 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8ef0990b9e4a58c235b178375fd18ca-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" (UID: \"e8ef0990b9e4a58c235b178375fd18ca\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549000 kubelet[3530]: I1013 05:00:10.548998 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8ef0990b9e4a58c235b178375fd18ca-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" (UID: \"e8ef0990b9e4a58c235b178375fd18ca\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549000 kubelet[3530]: I1013 05:00:10.549010 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549153 kubelet[3530]: I1013 05:00:10.549021 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549153 kubelet[3530]: I1013 05:00:10.549040 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3463929a8da9d1f032115937306fb81e-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-a-bf8a300537\" (UID: \"3463929a8da9d1f032115937306fb81e\") " pod="kube-system/kube-scheduler-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549153 kubelet[3530]: I1013 05:00:10.549051 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8ef0990b9e4a58c235b178375fd18ca-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" (UID: \"e8ef0990b9e4a58c235b178375fd18ca\") " pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549153 kubelet[3530]: I1013 05:00:10.549068 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:10.549153 kubelet[3530]: I1013 05:00:10.549077 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 
13 05:00:10.549232 kubelet[3530]: I1013 05:00:10.549087 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c53a119800c16ef65ea54bd5c3e17348-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" (UID: \"c53a119800c16ef65ea54bd5c3e17348\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:11.321727 kubelet[3530]: I1013 05:00:11.321618 3530 apiserver.go:52] "Watching apiserver" Oct 13 05:00:11.347641 kubelet[3530]: I1013 05:00:11.347603 3530 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:00:11.385192 kubelet[3530]: I1013 05:00:11.385162 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:11.386518 kubelet[3530]: I1013 05:00:11.385844 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:11.395147 kubelet[3530]: I1013 05:00:11.395088 3530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:00:11.396787 kubelet[3530]: I1013 05:00:11.396591 3530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 05:00:11.396787 kubelet[3530]: E1013 05:00:11.396620 3530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-a-bf8a300537\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:11.396889 kubelet[3530]: E1013 05:00:11.396873 3530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-a-bf8a300537\" already exists" 
pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" Oct 13 05:00:11.401700 kubelet[3530]: I1013 05:00:11.401636 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-a-bf8a300537" podStartSLOduration=3.401625495 podStartE2EDuration="3.401625495s" podCreationTimestamp="2025-10-13 05:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:00:11.401449346 +0000 UTC m=+1.127483989" watchObservedRunningTime="2025-10-13 05:00:11.401625495 +0000 UTC m=+1.127660138" Oct 13 05:00:11.411714 kubelet[3530]: I1013 05:00:11.411654 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-a-bf8a300537" podStartSLOduration=1.4116456880000001 podStartE2EDuration="1.411645688s" podCreationTimestamp="2025-10-13 05:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:00:11.410845943 +0000 UTC m=+1.136880690" watchObservedRunningTime="2025-10-13 05:00:11.411645688 +0000 UTC m=+1.137680331" Oct 13 05:00:11.419879 kubelet[3530]: I1013 05:00:11.419809 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-a-bf8a300537" podStartSLOduration=1.4198002459999999 podStartE2EDuration="1.419800246s" podCreationTimestamp="2025-10-13 05:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:00:11.41926287 +0000 UTC m=+1.145297513" watchObservedRunningTime="2025-10-13 05:00:11.419800246 +0000 UTC m=+1.145834889" Oct 13 05:00:14.642907 kubelet[3530]: I1013 05:00:14.642749 3530 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 
05:00:14.643528 containerd[1982]: time="2025-10-13T05:00:14.643427000Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:00:14.643750 kubelet[3530]: I1013 05:00:14.643661 3530 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:00:15.168228 systemd[1]: Created slice kubepods-besteffort-pod0db7dd47_2202_41e6_ba20_eb82e50c31cd.slice - libcontainer container kubepods-besteffort-pod0db7dd47_2202_41e6_ba20_eb82e50c31cd.slice. Oct 13 05:00:15.172283 kubelet[3530]: I1013 05:00:15.172255 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0db7dd47-2202-41e6-ba20-eb82e50c31cd-kube-proxy\") pod \"kube-proxy-27cfp\" (UID: \"0db7dd47-2202-41e6-ba20-eb82e50c31cd\") " pod="kube-system/kube-proxy-27cfp" Oct 13 05:00:15.172283 kubelet[3530]: I1013 05:00:15.172285 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0db7dd47-2202-41e6-ba20-eb82e50c31cd-lib-modules\") pod \"kube-proxy-27cfp\" (UID: \"0db7dd47-2202-41e6-ba20-eb82e50c31cd\") " pod="kube-system/kube-proxy-27cfp" Oct 13 05:00:15.172564 kubelet[3530]: I1013 05:00:15.172301 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0db7dd47-2202-41e6-ba20-eb82e50c31cd-xtables-lock\") pod \"kube-proxy-27cfp\" (UID: \"0db7dd47-2202-41e6-ba20-eb82e50c31cd\") " pod="kube-system/kube-proxy-27cfp" Oct 13 05:00:15.172564 kubelet[3530]: I1013 05:00:15.172312 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmkw\" (UniqueName: \"kubernetes.io/projected/0db7dd47-2202-41e6-ba20-eb82e50c31cd-kube-api-access-bkmkw\") pod \"kube-proxy-27cfp\" (UID: 
\"0db7dd47-2202-41e6-ba20-eb82e50c31cd\") " pod="kube-system/kube-proxy-27cfp" Oct 13 05:00:15.284182 kubelet[3530]: E1013 05:00:15.283885 3530 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 05:00:15.284182 kubelet[3530]: E1013 05:00:15.283920 3530 projected.go:194] Error preparing data for projected volume kube-api-access-bkmkw for pod kube-system/kube-proxy-27cfp: configmap "kube-root-ca.crt" not found Oct 13 05:00:15.284182 kubelet[3530]: E1013 05:00:15.284000 3530 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0db7dd47-2202-41e6-ba20-eb82e50c31cd-kube-api-access-bkmkw podName:0db7dd47-2202-41e6-ba20-eb82e50c31cd nodeName:}" failed. No retries permitted until 2025-10-13 05:00:15.783970198 +0000 UTC m=+5.510004841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bkmkw" (UniqueName: "kubernetes.io/projected/0db7dd47-2202-41e6-ba20-eb82e50c31cd-kube-api-access-bkmkw") pod "kube-proxy-27cfp" (UID: "0db7dd47-2202-41e6-ba20-eb82e50c31cd") : configmap "kube-root-ca.crt" not found Oct 13 05:00:15.788164 systemd[1]: Created slice kubepods-besteffort-podb7bb48f9_6fea_4bf3_98aa_fe9b856a244c.slice - libcontainer container kubepods-besteffort-podb7bb48f9_6fea_4bf3_98aa_fe9b856a244c.slice. 
Oct 13 05:00:15.876061 kubelet[3530]: I1013 05:00:15.875891 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvnj\" (UniqueName: \"kubernetes.io/projected/b7bb48f9-6fea-4bf3-98aa-fe9b856a244c-kube-api-access-qwvnj\") pod \"tigera-operator-755d956888-xbgqp\" (UID: \"b7bb48f9-6fea-4bf3-98aa-fe9b856a244c\") " pod="tigera-operator/tigera-operator-755d956888-xbgqp" Oct 13 05:00:15.876061 kubelet[3530]: I1013 05:00:15.875925 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b7bb48f9-6fea-4bf3-98aa-fe9b856a244c-var-lib-calico\") pod \"tigera-operator-755d956888-xbgqp\" (UID: \"b7bb48f9-6fea-4bf3-98aa-fe9b856a244c\") " pod="tigera-operator/tigera-operator-755d956888-xbgqp" Oct 13 05:00:16.076380 containerd[1982]: time="2025-10-13T05:00:16.076264572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27cfp,Uid:0db7dd47-2202-41e6-ba20-eb82e50c31cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:16.091670 containerd[1982]: time="2025-10-13T05:00:16.091635289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xbgqp,Uid:b7bb48f9-6fea-4bf3-98aa-fe9b856a244c,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:00:16.161454 containerd[1982]: time="2025-10-13T05:00:16.161271365Z" level=info msg="connecting to shim 083cd0a08b16111a2385d10e9448fad79e85fd7d1c588e4e3446baa3e44a4c7e" address="unix:///run/containerd/s/c5bac749ce619fec3acc942df21b5ea3a3b8106794a38669e95f5bf5410307b6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:16.170951 containerd[1982]: time="2025-10-13T05:00:16.170898586Z" level=info msg="connecting to shim 9beed76eb8bb7914306117221f5b499e7ffff92dd08db6933642875fab209089" address="unix:///run/containerd/s/ba6464aff8b2afacda2303bd5b51d23b256d4cf299ee8b8bdf1f91f39f6e1ffb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 
05:00:16.184767 systemd[1]: Started cri-containerd-083cd0a08b16111a2385d10e9448fad79e85fd7d1c588e4e3446baa3e44a4c7e.scope - libcontainer container 083cd0a08b16111a2385d10e9448fad79e85fd7d1c588e4e3446baa3e44a4c7e. Oct 13 05:00:16.188077 systemd[1]: Started cri-containerd-9beed76eb8bb7914306117221f5b499e7ffff92dd08db6933642875fab209089.scope - libcontainer container 9beed76eb8bb7914306117221f5b499e7ffff92dd08db6933642875fab209089. Oct 13 05:00:16.217084 containerd[1982]: time="2025-10-13T05:00:16.216790048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27cfp,Uid:0db7dd47-2202-41e6-ba20-eb82e50c31cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"083cd0a08b16111a2385d10e9448fad79e85fd7d1c588e4e3446baa3e44a4c7e\"" Oct 13 05:00:16.233358 containerd[1982]: time="2025-10-13T05:00:16.233329629Z" level=info msg="CreateContainer within sandbox \"083cd0a08b16111a2385d10e9448fad79e85fd7d1c588e4e3446baa3e44a4c7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:00:16.235397 containerd[1982]: time="2025-10-13T05:00:16.235368990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xbgqp,Uid:b7bb48f9-6fea-4bf3-98aa-fe9b856a244c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9beed76eb8bb7914306117221f5b499e7ffff92dd08db6933642875fab209089\"" Oct 13 05:00:16.237473 containerd[1982]: time="2025-10-13T05:00:16.237351413Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:00:16.253894 containerd[1982]: time="2025-10-13T05:00:16.253861321Z" level=info msg="Container a38443983a2b06a74e13ab4c2c24f0dbbcab31abab9996b2c7e57a189dc0a4eb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:16.272951 containerd[1982]: time="2025-10-13T05:00:16.272897508Z" level=info msg="CreateContainer within sandbox \"083cd0a08b16111a2385d10e9448fad79e85fd7d1c588e4e3446baa3e44a4c7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"a38443983a2b06a74e13ab4c2c24f0dbbcab31abab9996b2c7e57a189dc0a4eb\"" Oct 13 05:00:16.273925 containerd[1982]: time="2025-10-13T05:00:16.273871111Z" level=info msg="StartContainer for \"a38443983a2b06a74e13ab4c2c24f0dbbcab31abab9996b2c7e57a189dc0a4eb\"" Oct 13 05:00:16.275251 containerd[1982]: time="2025-10-13T05:00:16.275178291Z" level=info msg="connecting to shim a38443983a2b06a74e13ab4c2c24f0dbbcab31abab9996b2c7e57a189dc0a4eb" address="unix:///run/containerd/s/c5bac749ce619fec3acc942df21b5ea3a3b8106794a38669e95f5bf5410307b6" protocol=ttrpc version=3 Oct 13 05:00:16.295600 systemd[1]: Started cri-containerd-a38443983a2b06a74e13ab4c2c24f0dbbcab31abab9996b2c7e57a189dc0a4eb.scope - libcontainer container a38443983a2b06a74e13ab4c2c24f0dbbcab31abab9996b2c7e57a189dc0a4eb. Oct 13 05:00:16.328167 containerd[1982]: time="2025-10-13T05:00:16.327141228Z" level=info msg="StartContainer for \"a38443983a2b06a74e13ab4c2c24f0dbbcab31abab9996b2c7e57a189dc0a4eb\" returns successfully" Oct 13 05:00:17.281381 kubelet[3530]: I1013 05:00:17.281326 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27cfp" podStartSLOduration=2.281310955 podStartE2EDuration="2.281310955s" podCreationTimestamp="2025-10-13 05:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:00:16.410309132 +0000 UTC m=+6.136343775" watchObservedRunningTime="2025-10-13 05:00:17.281310955 +0000 UTC m=+7.007345598" Oct 13 05:00:18.220700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071977029.mount: Deactivated successfully. 
Oct 13 05:00:18.948655 containerd[1982]: time="2025-10-13T05:00:18.948597741Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:18.953554 containerd[1982]: time="2025-10-13T05:00:18.953386051Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Oct 13 05:00:18.956501 containerd[1982]: time="2025-10-13T05:00:18.956473481Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:18.961295 containerd[1982]: time="2025-10-13T05:00:18.961268831Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:18.961964 containerd[1982]: time="2025-10-13T05:00:18.961605640Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.72420969s" Oct 13 05:00:18.961964 containerd[1982]: time="2025-10-13T05:00:18.961632105Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Oct 13 05:00:18.969528 containerd[1982]: time="2025-10-13T05:00:18.969505364Z" level=info msg="CreateContainer within sandbox \"9beed76eb8bb7914306117221f5b499e7ffff92dd08db6933642875fab209089\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:00:18.987857 containerd[1982]: time="2025-10-13T05:00:18.987624494Z" level=info msg="Container 
b1fab5efe80a29619d39fadc432798ae6ac18c5e567ee37b37e0c7a794d0eaac: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:19.005498 containerd[1982]: time="2025-10-13T05:00:19.005416134Z" level=info msg="CreateContainer within sandbox \"9beed76eb8bb7914306117221f5b499e7ffff92dd08db6933642875fab209089\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b1fab5efe80a29619d39fadc432798ae6ac18c5e567ee37b37e0c7a794d0eaac\"" Oct 13 05:00:19.006252 containerd[1982]: time="2025-10-13T05:00:19.006044960Z" level=info msg="StartContainer for \"b1fab5efe80a29619d39fadc432798ae6ac18c5e567ee37b37e0c7a794d0eaac\"" Oct 13 05:00:19.007175 containerd[1982]: time="2025-10-13T05:00:19.007153959Z" level=info msg="connecting to shim b1fab5efe80a29619d39fadc432798ae6ac18c5e567ee37b37e0c7a794d0eaac" address="unix:///run/containerd/s/ba6464aff8b2afacda2303bd5b51d23b256d4cf299ee8b8bdf1f91f39f6e1ffb" protocol=ttrpc version=3 Oct 13 05:00:19.028702 systemd[1]: Started cri-containerd-b1fab5efe80a29619d39fadc432798ae6ac18c5e567ee37b37e0c7a794d0eaac.scope - libcontainer container b1fab5efe80a29619d39fadc432798ae6ac18c5e567ee37b37e0c7a794d0eaac. 
Oct 13 05:00:19.057780 containerd[1982]: time="2025-10-13T05:00:19.057732394Z" level=info msg="StartContainer for \"b1fab5efe80a29619d39fadc432798ae6ac18c5e567ee37b37e0c7a794d0eaac\" returns successfully" Oct 13 05:00:19.417611 kubelet[3530]: I1013 05:00:19.417546 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-xbgqp" podStartSLOduration=1.691677434 podStartE2EDuration="4.417529121s" podCreationTimestamp="2025-10-13 05:00:15 +0000 UTC" firstStartedPulling="2025-10-13 05:00:16.236300456 +0000 UTC m=+5.962335099" lastFinishedPulling="2025-10-13 05:00:18.962152143 +0000 UTC m=+8.688186786" observedRunningTime="2025-10-13 05:00:19.416793805 +0000 UTC m=+9.142828472" watchObservedRunningTime="2025-10-13 05:00:19.417529121 +0000 UTC m=+9.143563764" Oct 13 05:00:24.421314 sudo[2490]: pam_unix(sudo:session): session closed for user root Oct 13 05:00:24.496511 sshd[2489]: Connection closed by 10.200.16.10 port 34718 Oct 13 05:00:24.496404 sshd-session[2486]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:24.503021 systemd[1]: sshd@6-10.200.20.16:22-10.200.16.10:34718.service: Deactivated successfully. Oct 13 05:00:24.503381 systemd-logind[1953]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:00:24.507270 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:00:24.507596 systemd[1]: session-9.scope: Consumed 3.305s CPU time, 224.4M memory peak. Oct 13 05:00:24.511183 systemd-logind[1953]: Removed session 9. Oct 13 05:00:28.664368 systemd[1]: Created slice kubepods-besteffort-podb44d1b3c_cdb3_4eab_8101_3bfbc6396bb8.slice - libcontainer container kubepods-besteffort-podb44d1b3c_cdb3_4eab_8101_3bfbc6396bb8.slice. 
Oct 13 05:00:28.760593 kubelet[3530]: I1013 05:00:28.760552 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf6ks\" (UniqueName: \"kubernetes.io/projected/b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8-kube-api-access-vf6ks\") pod \"calico-typha-cdf5d7b88-gnrsr\" (UID: \"b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8\") " pod="calico-system/calico-typha-cdf5d7b88-gnrsr" Oct 13 05:00:28.760593 kubelet[3530]: I1013 05:00:28.760586 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8-tigera-ca-bundle\") pod \"calico-typha-cdf5d7b88-gnrsr\" (UID: \"b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8\") " pod="calico-system/calico-typha-cdf5d7b88-gnrsr" Oct 13 05:00:28.760593 kubelet[3530]: I1013 05:00:28.760598 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8-typha-certs\") pod \"calico-typha-cdf5d7b88-gnrsr\" (UID: \"b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8\") " pod="calico-system/calico-typha-cdf5d7b88-gnrsr" Oct 13 05:00:28.859847 systemd[1]: Created slice kubepods-besteffort-pod948c99fd_67ca_42ee_a47b_93aa0f42c06d.slice - libcontainer container kubepods-besteffort-pod948c99fd_67ca_42ee_a47b_93aa0f42c06d.slice. 
Oct 13 05:00:28.961569 kubelet[3530]: I1013 05:00:28.961436 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-xtables-lock\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962189 kubelet[3530]: I1013 05:00:28.961971 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-cni-net-dir\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962189 kubelet[3530]: I1013 05:00:28.961998 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9dg\" (UniqueName: \"kubernetes.io/projected/948c99fd-67ca-42ee-a47b-93aa0f42c06d-kube-api-access-md9dg\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962189 kubelet[3530]: I1013 05:00:28.962024 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-flexvol-driver-host\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962189 kubelet[3530]: I1013 05:00:28.962034 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/948c99fd-67ca-42ee-a47b-93aa0f42c06d-node-certs\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962189 kubelet[3530]: I1013 05:00:28.962046 
3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-var-lib-calico\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962565 kubelet[3530]: I1013 05:00:28.962058 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-cni-log-dir\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962565 kubelet[3530]: I1013 05:00:28.962069 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-cni-bin-dir\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962565 kubelet[3530]: I1013 05:00:28.962447 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-lib-modules\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962809 kubelet[3530]: I1013 05:00:28.962464 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-var-run-calico\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962809 kubelet[3530]: I1013 05:00:28.962748 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/948c99fd-67ca-42ee-a47b-93aa0f42c06d-tigera-ca-bundle\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.962809 kubelet[3530]: I1013 05:00:28.962761 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/948c99fd-67ca-42ee-a47b-93aa0f42c06d-policysync\") pod \"calico-node-5gpdg\" (UID: \"948c99fd-67ca-42ee-a47b-93aa0f42c06d\") " pod="calico-system/calico-node-5gpdg" Oct 13 05:00:28.968288 containerd[1982]: time="2025-10-13T05:00:28.968169450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdf5d7b88-gnrsr,Uid:b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:29.015979 containerd[1982]: time="2025-10-13T05:00:29.015915339Z" level=info msg="connecting to shim cf8a3d85b2ef10d0044bb44cbfb68c397776a4f839f05c9dbd368530b36940fd" address="unix:///run/containerd/s/ad24d7e32d42e12d417420002ca26f7ade660d31675f93e3e00f6804e809db75" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:29.017025 kubelet[3530]: E1013 05:00:29.016977 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:29.046754 systemd[1]: Started cri-containerd-cf8a3d85b2ef10d0044bb44cbfb68c397776a4f839f05c9dbd368530b36940fd.scope - libcontainer container cf8a3d85b2ef10d0044bb44cbfb68c397776a4f839f05c9dbd368530b36940fd. 
Oct 13 05:00:29.064503 kubelet[3530]: I1013 05:00:29.063903 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cc60dacc-3ca6-4b24-9872-16cd8a93e18d-varrun\") pod \"csi-node-driver-2fn9d\" (UID: \"cc60dacc-3ca6-4b24-9872-16cd8a93e18d\") " pod="calico-system/csi-node-driver-2fn9d" Oct 13 05:00:29.065385 kubelet[3530]: I1013 05:00:29.064636 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kppc\" (UniqueName: \"kubernetes.io/projected/cc60dacc-3ca6-4b24-9872-16cd8a93e18d-kube-api-access-5kppc\") pod \"csi-node-driver-2fn9d\" (UID: \"cc60dacc-3ca6-4b24-9872-16cd8a93e18d\") " pod="calico-system/csi-node-driver-2fn9d" Oct 13 05:00:29.065385 kubelet[3530]: I1013 05:00:29.064689 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc60dacc-3ca6-4b24-9872-16cd8a93e18d-kubelet-dir\") pod \"csi-node-driver-2fn9d\" (UID: \"cc60dacc-3ca6-4b24-9872-16cd8a93e18d\") " pod="calico-system/csi-node-driver-2fn9d" Oct 13 05:00:29.065385 kubelet[3530]: I1013 05:00:29.064702 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cc60dacc-3ca6-4b24-9872-16cd8a93e18d-registration-dir\") pod \"csi-node-driver-2fn9d\" (UID: \"cc60dacc-3ca6-4b24-9872-16cd8a93e18d\") " pod="calico-system/csi-node-driver-2fn9d" Oct 13 05:00:29.065385 kubelet[3530]: I1013 05:00:29.064729 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cc60dacc-3ca6-4b24-9872-16cd8a93e18d-socket-dir\") pod \"csi-node-driver-2fn9d\" (UID: \"cc60dacc-3ca6-4b24-9872-16cd8a93e18d\") " pod="calico-system/csi-node-driver-2fn9d" Oct 13 05:00:29.071012 
kubelet[3530]: E1013 05:00:29.070996 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.071107 kubelet[3530]: W1013 05:00:29.071092 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.071488 kubelet[3530]: E1013 05:00:29.071461 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.092493 kubelet[3530]: E1013 05:00:29.092400 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.092718 kubelet[3530]: W1013 05:00:29.092699 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.092894 kubelet[3530]: E1013 05:00:29.092874 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.100130 containerd[1982]: time="2025-10-13T05:00:29.099796376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdf5d7b88-gnrsr,Uid:b44d1b3c-cdb3-4eab-8101-3bfbc6396bb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf8a3d85b2ef10d0044bb44cbfb68c397776a4f839f05c9dbd368530b36940fd\"" Oct 13 05:00:29.103712 containerd[1982]: time="2025-10-13T05:00:29.103673776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:00:29.163778 containerd[1982]: time="2025-10-13T05:00:29.163736930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gpdg,Uid:948c99fd-67ca-42ee-a47b-93aa0f42c06d,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:29.166240 kubelet[3530]: E1013 05:00:29.166168 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.166240 kubelet[3530]: W1013 05:00:29.166200 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.166240 kubelet[3530]: E1013 05:00:29.166221 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.166700 kubelet[3530]: E1013 05:00:29.166658 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.166700 kubelet[3530]: W1013 05:00:29.166674 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.166700 kubelet[3530]: E1013 05:00:29.166686 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.167056 kubelet[3530]: E1013 05:00:29.167011 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.167056 kubelet[3530]: W1013 05:00:29.167035 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.167056 kubelet[3530]: E1013 05:00:29.167045 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.167452 kubelet[3530]: E1013 05:00:29.167419 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.167452 kubelet[3530]: W1013 05:00:29.167430 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.167452 kubelet[3530]: E1013 05:00:29.167440 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.168151 kubelet[3530]: E1013 05:00:29.168032 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.168151 kubelet[3530]: W1013 05:00:29.168050 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.168151 kubelet[3530]: E1013 05:00:29.168060 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.168386 kubelet[3530]: E1013 05:00:29.168358 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.168386 kubelet[3530]: W1013 05:00:29.168369 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.168386 kubelet[3530]: E1013 05:00:29.168377 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.168728 kubelet[3530]: E1013 05:00:29.168695 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.168728 kubelet[3530]: W1013 05:00:29.168707 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.168728 kubelet[3530]: E1013 05:00:29.168717 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.168996 kubelet[3530]: E1013 05:00:29.168985 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.169080 kubelet[3530]: W1013 05:00:29.169054 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.169080 kubelet[3530]: E1013 05:00:29.169068 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.169342 kubelet[3530]: E1013 05:00:29.169310 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.169342 kubelet[3530]: W1013 05:00:29.169322 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.169342 kubelet[3530]: E1013 05:00:29.169331 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.170352 kubelet[3530]: E1013 05:00:29.170315 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.170352 kubelet[3530]: W1013 05:00:29.170330 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.170352 kubelet[3530]: E1013 05:00:29.170340 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.172931 kubelet[3530]: E1013 05:00:29.170640 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.172931 kubelet[3530]: W1013 05:00:29.170653 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.172931 kubelet[3530]: E1013 05:00:29.170663 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.172931 kubelet[3530]: E1013 05:00:29.170872 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.172931 kubelet[3530]: W1013 05:00:29.170882 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.172931 kubelet[3530]: E1013 05:00:29.170891 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.172931 kubelet[3530]: E1013 05:00:29.171176 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.172931 kubelet[3530]: W1013 05:00:29.171186 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.172931 kubelet[3530]: E1013 05:00:29.171196 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.172931 kubelet[3530]: E1013 05:00:29.171700 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.173117 kubelet[3530]: W1013 05:00:29.171712 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.173117 kubelet[3530]: E1013 05:00:29.171723 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.173117 kubelet[3530]: E1013 05:00:29.172111 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.173117 kubelet[3530]: W1013 05:00:29.172123 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.173117 kubelet[3530]: E1013 05:00:29.172133 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.173117 kubelet[3530]: E1013 05:00:29.172522 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.173117 kubelet[3530]: W1013 05:00:29.172533 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.173117 kubelet[3530]: E1013 05:00:29.172551 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.173836 kubelet[3530]: E1013 05:00:29.173623 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.173836 kubelet[3530]: W1013 05:00:29.173635 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.173836 kubelet[3530]: E1013 05:00:29.173645 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.173994 kubelet[3530]: E1013 05:00:29.173984 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.174257 kubelet[3530]: W1013 05:00:29.174071 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.174257 kubelet[3530]: E1013 05:00:29.174086 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.174904 kubelet[3530]: E1013 05:00:29.174890 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.175157 kubelet[3530]: W1013 05:00:29.175082 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.175157 kubelet[3530]: E1013 05:00:29.175118 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.175788 kubelet[3530]: E1013 05:00:29.175763 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.177079 kubelet[3530]: W1013 05:00:29.176789 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.177079 kubelet[3530]: E1013 05:00:29.176814 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.177936 kubelet[3530]: E1013 05:00:29.177909 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.178039 kubelet[3530]: W1013 05:00:29.178017 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.178088 kubelet[3530]: E1013 05:00:29.178040 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.178756 kubelet[3530]: E1013 05:00:29.178389 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.178756 kubelet[3530]: W1013 05:00:29.178402 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.178756 kubelet[3530]: E1013 05:00:29.178412 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.178997 kubelet[3530]: E1013 05:00:29.178977 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.178997 kubelet[3530]: W1013 05:00:29.178992 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.178997 kubelet[3530]: E1013 05:00:29.179003 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.179423 kubelet[3530]: E1013 05:00:29.179405 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.179423 kubelet[3530]: W1013 05:00:29.179418 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.179423 kubelet[3530]: E1013 05:00:29.179429 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.180087 kubelet[3530]: E1013 05:00:29.180068 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.180087 kubelet[3530]: W1013 05:00:29.180088 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.180166 kubelet[3530]: E1013 05:00:29.180099 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:29.190496 kubelet[3530]: E1013 05:00:29.190317 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:29.190496 kubelet[3530]: W1013 05:00:29.190332 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:29.190496 kubelet[3530]: E1013 05:00:29.190345 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:29.210778 containerd[1982]: time="2025-10-13T05:00:29.210703637Z" level=info msg="connecting to shim 09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80" address="unix:///run/containerd/s/48479be5b77eb14c2bf1d5cfa984fdcf9d68a051b911e987703dfadebe2ce212" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:29.233687 systemd[1]: Started cri-containerd-09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80.scope - libcontainer container 09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80. Oct 13 05:00:29.265776 containerd[1982]: time="2025-10-13T05:00:29.265713518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gpdg,Uid:948c99fd-67ca-42ee-a47b-93aa0f42c06d,Namespace:calico-system,Attempt:0,} returns sandbox id \"09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80\"" Oct 13 05:00:30.857456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379248265.mount: Deactivated successfully. 
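The repeated driver-call failures above occur because the probed executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is absent, so the kubelet reads empty stdout and then fails to unmarshal it as JSON ("unexpected end of JSON input"). Under the FlexVolume driver-call convention, a driver must answer each command (here, init) with a JSON status object on stdout. A minimal hypothetical sketch of a conforming handler (not Calico's actual uds binary, which implements further operations):

```shell
# Hypothetical minimal FlexVolume driver handler. The kubelet invokes the
# driver binary with a command name ("init", "mount", "unmount", ...) and
# unmarshals stdout as JSON; an empty reply produces the errors logged above.
flexvolume_driver() {
  case "$1" in
    init)
      # Report success and declare that this driver does not support attach.
      echo '{"status": "Success", "capabilities": {"attach": false}}'
      ;;
    *)
      echo '{"status": "Not supported"}'
      return 1
      ;;
  esac
}

flexvolume_driver init
```

Installing any executable that emits valid JSON for init at that path would silence the probe errors; they are otherwise harmless here, since the Calico pods in this log use standard volume types rather than FlexVolume mounts.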
Oct 13 05:00:31.357955 kubelet[3530]: E1013 05:00:31.357908 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:31.872886 containerd[1982]: time="2025-10-13T05:00:31.872846461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:31.876320 containerd[1982]: time="2025-10-13T05:00:31.876292168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Oct 13 05:00:31.879155 containerd[1982]: time="2025-10-13T05:00:31.879118257Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:31.884483 containerd[1982]: time="2025-10-13T05:00:31.884433201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:31.885341 containerd[1982]: time="2025-10-13T05:00:31.885251817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.781388907s" Oct 13 05:00:31.885341 containerd[1982]: time="2025-10-13T05:00:31.885277481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Oct 13 05:00:31.886441 containerd[1982]: time="2025-10-13T05:00:31.886421946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:00:31.902539 containerd[1982]: time="2025-10-13T05:00:31.902460134Z" level=info msg="CreateContainer within sandbox \"cf8a3d85b2ef10d0044bb44cbfb68c397776a4f839f05c9dbd368530b36940fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:00:31.923646 containerd[1982]: time="2025-10-13T05:00:31.923464864Z" level=info msg="Container 88c1e2588225fed7117313cc257d34304ec1a9dadfb6d6cd64f300f1824bce26: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:31.940853 containerd[1982]: time="2025-10-13T05:00:31.940817042Z" level=info msg="CreateContainer within sandbox \"cf8a3d85b2ef10d0044bb44cbfb68c397776a4f839f05c9dbd368530b36940fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"88c1e2588225fed7117313cc257d34304ec1a9dadfb6d6cd64f300f1824bce26\"" Oct 13 05:00:31.941340 containerd[1982]: time="2025-10-13T05:00:31.941314128Z" level=info msg="StartContainer for \"88c1e2588225fed7117313cc257d34304ec1a9dadfb6d6cd64f300f1824bce26\"" Oct 13 05:00:31.944666 containerd[1982]: time="2025-10-13T05:00:31.944634087Z" level=info msg="connecting to shim 88c1e2588225fed7117313cc257d34304ec1a9dadfb6d6cd64f300f1824bce26" address="unix:///run/containerd/s/ad24d7e32d42e12d417420002ca26f7ade660d31675f93e3e00f6804e809db75" protocol=ttrpc version=3 Oct 13 05:00:31.969656 systemd[1]: Started cri-containerd-88c1e2588225fed7117313cc257d34304ec1a9dadfb6d6cd64f300f1824bce26.scope - libcontainer container 88c1e2588225fed7117313cc257d34304ec1a9dadfb6d6cd64f300f1824bce26. 
Oct 13 05:00:32.008133 containerd[1982]: time="2025-10-13T05:00:32.008081691Z" level=info msg="StartContainer for \"88c1e2588225fed7117313cc257d34304ec1a9dadfb6d6cd64f300f1824bce26\" returns successfully" Oct 13 05:00:32.468024 kubelet[3530]: E1013 05:00:32.467940 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.468024 kubelet[3530]: W1013 05:00:32.467966 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.468024 kubelet[3530]: E1013 05:00:32.467986 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.468629 kubelet[3530]: E1013 05:00:32.468546 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.468629 kubelet[3530]: W1013 05:00:32.468559 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.468629 kubelet[3530]: E1013 05:00:32.468591 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.468890 kubelet[3530]: E1013 05:00:32.468878 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.469003 kubelet[3530]: W1013 05:00:32.468948 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.469003 kubelet[3530]: E1013 05:00:32.468965 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.469206 kubelet[3530]: E1013 05:00:32.469196 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.469273 kubelet[3530]: W1013 05:00:32.469261 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.469329 kubelet[3530]: E1013 05:00:32.469315 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.469578 kubelet[3530]: E1013 05:00:32.469529 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.469578 kubelet[3530]: W1013 05:00:32.469539 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.469578 kubelet[3530]: E1013 05:00:32.469548 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.469841 kubelet[3530]: E1013 05:00:32.469790 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.469841 kubelet[3530]: W1013 05:00:32.469801 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.469841 kubelet[3530]: E1013 05:00:32.469810 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.470107 kubelet[3530]: E1013 05:00:32.470056 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.470107 kubelet[3530]: W1013 05:00:32.470066 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.470107 kubelet[3530]: E1013 05:00:32.470075 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.470385 kubelet[3530]: E1013 05:00:32.470323 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.470385 kubelet[3530]: W1013 05:00:32.470333 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.470385 kubelet[3530]: E1013 05:00:32.470343 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.470663 kubelet[3530]: E1013 05:00:32.470615 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.470663 kubelet[3530]: W1013 05:00:32.470626 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.470663 kubelet[3530]: E1013 05:00:32.470635 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.470928 kubelet[3530]: E1013 05:00:32.470877 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.470928 kubelet[3530]: W1013 05:00:32.470888 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.470928 kubelet[3530]: E1013 05:00:32.470899 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.471194 kubelet[3530]: E1013 05:00:32.471143 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.471194 kubelet[3530]: W1013 05:00:32.471153 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.471194 kubelet[3530]: E1013 05:00:32.471163 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.471457 kubelet[3530]: E1013 05:00:32.471394 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.471457 kubelet[3530]: W1013 05:00:32.471405 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.471457 kubelet[3530]: E1013 05:00:32.471413 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.471718 kubelet[3530]: E1013 05:00:32.471705 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.471828 kubelet[3530]: W1013 05:00:32.471779 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.471828 kubelet[3530]: E1013 05:00:32.471793 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.472077 kubelet[3530]: E1013 05:00:32.472013 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.472077 kubelet[3530]: W1013 05:00:32.472025 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.472077 kubelet[3530]: E1013 05:00:32.472035 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.472350 kubelet[3530]: E1013 05:00:32.472280 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.472350 kubelet[3530]: W1013 05:00:32.472293 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.472350 kubelet[3530]: E1013 05:00:32.472303 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.490699 kubelet[3530]: E1013 05:00:32.490674 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.490699 kubelet[3530]: W1013 05:00:32.490692 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.490699 kubelet[3530]: E1013 05:00:32.490704 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.490867 kubelet[3530]: E1013 05:00:32.490847 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.490867 kubelet[3530]: W1013 05:00:32.490859 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.490867 kubelet[3530]: E1013 05:00:32.490867 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.491029 kubelet[3530]: E1013 05:00:32.491014 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.491029 kubelet[3530]: W1013 05:00:32.491024 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.491208 kubelet[3530]: E1013 05:00:32.491031 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.491299 kubelet[3530]: E1013 05:00:32.491284 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.491354 kubelet[3530]: W1013 05:00:32.491342 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.491519 kubelet[3530]: E1013 05:00:32.491399 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.491638 kubelet[3530]: E1013 05:00:32.491627 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.491700 kubelet[3530]: W1013 05:00:32.491688 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.491744 kubelet[3530]: E1013 05:00:32.491734 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.491924 kubelet[3530]: E1013 05:00:32.491913 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.492099 kubelet[3530]: W1013 05:00:32.491981 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.492099 kubelet[3530]: E1013 05:00:32.491997 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.492225 kubelet[3530]: E1013 05:00:32.492214 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.492286 kubelet[3530]: W1013 05:00:32.492276 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.492331 kubelet[3530]: E1013 05:00:32.492321 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.492556 kubelet[3530]: E1013 05:00:32.492544 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.492727 kubelet[3530]: W1013 05:00:32.492622 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.492727 kubelet[3530]: E1013 05:00:32.492638 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.492852 kubelet[3530]: E1013 05:00:32.492841 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.492909 kubelet[3530]: W1013 05:00:32.492898 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.492962 kubelet[3530]: E1013 05:00:32.492950 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.493243 kubelet[3530]: E1013 05:00:32.493139 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.493243 kubelet[3530]: W1013 05:00:32.493149 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.493243 kubelet[3530]: E1013 05:00:32.493158 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.493406 kubelet[3530]: E1013 05:00:32.493394 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.493550 kubelet[3530]: W1013 05:00:32.493437 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.493550 kubelet[3530]: E1013 05:00:32.493450 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.493834 kubelet[3530]: E1013 05:00:32.493822 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.494007 kubelet[3530]: W1013 05:00:32.493885 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.494007 kubelet[3530]: E1013 05:00:32.493901 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.494132 kubelet[3530]: E1013 05:00:32.494120 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.494190 kubelet[3530]: W1013 05:00:32.494179 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.494237 kubelet[3530]: E1013 05:00:32.494226 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.494456 kubelet[3530]: E1013 05:00:32.494430 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.494456 kubelet[3530]: W1013 05:00:32.494442 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.494456 kubelet[3530]: E1013 05:00:32.494450 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.494723 kubelet[3530]: E1013 05:00:32.494551 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.494723 kubelet[3530]: W1013 05:00:32.494556 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.494723 kubelet[3530]: E1013 05:00:32.494562 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.494723 kubelet[3530]: E1013 05:00:32.494672 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.494723 kubelet[3530]: W1013 05:00:32.494677 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.494723 kubelet[3530]: E1013 05:00:32.494683 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:32.494981 kubelet[3530]: E1013 05:00:32.494966 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.494981 kubelet[3530]: W1013 05:00:32.494977 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.495045 kubelet[3530]: E1013 05:00:32.494985 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:32.495112 kubelet[3530]: E1013 05:00:32.495101 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:32.495112 kubelet[3530]: W1013 05:00:32.495109 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:32.495153 kubelet[3530]: E1013 05:00:32.495115 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.358259 kubelet[3530]: E1013 05:00:33.358205 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:33.434205 kubelet[3530]: I1013 05:00:33.434031 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:00:33.479011 kubelet[3530]: E1013 05:00:33.478970 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.479011 kubelet[3530]: W1013 05:00:33.479003 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.479377 kubelet[3530]: E1013 05:00:33.479029 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.479377 kubelet[3530]: E1013 05:00:33.479156 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.479377 kubelet[3530]: W1013 05:00:33.479163 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.479377 kubelet[3530]: E1013 05:00:33.479170 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.479377 kubelet[3530]: E1013 05:00:33.479278 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.479377 kubelet[3530]: W1013 05:00:33.479290 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.479377 kubelet[3530]: E1013 05:00:33.479296 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.479668 kubelet[3530]: E1013 05:00:33.479385 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.479668 kubelet[3530]: W1013 05:00:33.479390 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.479668 kubelet[3530]: E1013 05:00:33.479395 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.479668 kubelet[3530]: E1013 05:00:33.479507 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.479668 kubelet[3530]: W1013 05:00:33.479513 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.479668 kubelet[3530]: E1013 05:00:33.479518 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.479668 kubelet[3530]: E1013 05:00:33.479606 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.479668 kubelet[3530]: W1013 05:00:33.479611 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.479668 kubelet[3530]: E1013 05:00:33.479616 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.480023 kubelet[3530]: E1013 05:00:33.479698 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.480023 kubelet[3530]: W1013 05:00:33.479703 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.480023 kubelet[3530]: E1013 05:00:33.479707 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.480023 kubelet[3530]: E1013 05:00:33.479835 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.480023 kubelet[3530]: W1013 05:00:33.479840 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.480023 kubelet[3530]: E1013 05:00:33.479845 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.480023 kubelet[3530]: E1013 05:00:33.480009 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.480023 kubelet[3530]: W1013 05:00:33.480016 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.480023 kubelet[3530]: E1013 05:00:33.480023 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.480246 kubelet[3530]: E1013 05:00:33.480229 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.480246 kubelet[3530]: W1013 05:00:33.480239 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.480246 kubelet[3530]: E1013 05:00:33.480247 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.480430 kubelet[3530]: E1013 05:00:33.480417 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.480430 kubelet[3530]: W1013 05:00:33.480427 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.481588 kubelet[3530]: E1013 05:00:33.480435 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.481588 kubelet[3530]: E1013 05:00:33.480682 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.481588 kubelet[3530]: W1013 05:00:33.480692 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.481588 kubelet[3530]: E1013 05:00:33.480738 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.481588 kubelet[3530]: E1013 05:00:33.480885 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.481588 kubelet[3530]: W1013 05:00:33.480892 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.481588 kubelet[3530]: E1013 05:00:33.480899 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.481588 kubelet[3530]: E1013 05:00:33.481003 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.481588 kubelet[3530]: W1013 05:00:33.481008 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.481588 kubelet[3530]: E1013 05:00:33.481029 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.481735 kubelet[3530]: E1013 05:00:33.481129 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.481735 kubelet[3530]: W1013 05:00:33.481135 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.481735 kubelet[3530]: E1013 05:00:33.481140 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.497782 kubelet[3530]: E1013 05:00:33.497709 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.497782 kubelet[3530]: W1013 05:00:33.497726 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.497782 kubelet[3530]: E1013 05:00:33.497746 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.497962 kubelet[3530]: E1013 05:00:33.497947 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.497962 kubelet[3530]: W1013 05:00:33.497959 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.498017 kubelet[3530]: E1013 05:00:33.497968 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.498124 kubelet[3530]: E1013 05:00:33.498112 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.498124 kubelet[3530]: W1013 05:00:33.498121 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.498173 kubelet[3530]: E1013 05:00:33.498129 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.498276 kubelet[3530]: E1013 05:00:33.498264 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.498276 kubelet[3530]: W1013 05:00:33.498272 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.498330 kubelet[3530]: E1013 05:00:33.498279 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.498383 kubelet[3530]: E1013 05:00:33.498368 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.498383 kubelet[3530]: W1013 05:00:33.498375 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.498383 kubelet[3530]: E1013 05:00:33.498381 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.498503 kubelet[3530]: E1013 05:00:33.498456 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.498503 kubelet[3530]: W1013 05:00:33.498461 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.498503 kubelet[3530]: E1013 05:00:33.498466 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.498595 kubelet[3530]: E1013 05:00:33.498585 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.498595 kubelet[3530]: W1013 05:00:33.498591 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.498642 kubelet[3530]: E1013 05:00:33.498596 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.498934 kubelet[3530]: E1013 05:00:33.498863 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.498934 kubelet[3530]: W1013 05:00:33.498879 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.498934 kubelet[3530]: E1013 05:00:33.498891 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.499201 kubelet[3530]: E1013 05:00:33.499143 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.499201 kubelet[3530]: W1013 05:00:33.499153 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.499201 kubelet[3530]: E1013 05:00:33.499163 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.499492 kubelet[3530]: E1013 05:00:33.499411 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.499492 kubelet[3530]: W1013 05:00:33.499421 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.499492 kubelet[3530]: E1013 05:00:33.499430 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.499718 kubelet[3530]: E1013 05:00:33.499705 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.499842 kubelet[3530]: W1013 05:00:33.499779 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.499842 kubelet[3530]: E1013 05:00:33.499795 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.500034 kubelet[3530]: E1013 05:00:33.500023 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.500159 kubelet[3530]: W1013 05:00:33.500089 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.500159 kubelet[3530]: E1013 05:00:33.500105 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.500340 kubelet[3530]: E1013 05:00:33.500329 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.500417 kubelet[3530]: W1013 05:00:33.500406 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.500463 kubelet[3530]: E1013 05:00:33.500453 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.500701 kubelet[3530]: E1013 05:00:33.500685 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.500701 kubelet[3530]: W1013 05:00:33.500696 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.500812 kubelet[3530]: E1013 05:00:33.500706 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.500812 kubelet[3530]: E1013 05:00:33.500799 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.500812 kubelet[3530]: W1013 05:00:33.500804 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.500812 kubelet[3530]: E1013 05:00:33.500810 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.500928 kubelet[3530]: E1013 05:00:33.500912 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.500928 kubelet[3530]: W1013 05:00:33.500917 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.500928 kubelet[3530]: E1013 05:00:33.500923 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:33.501149 kubelet[3530]: E1013 05:00:33.501136 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.501225 kubelet[3530]: W1013 05:00:33.501212 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.501277 kubelet[3530]: E1013 05:00:33.501266 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:00:33.501534 kubelet[3530]: E1013 05:00:33.501497 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:00:33.501534 kubelet[3530]: W1013 05:00:33.501508 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:00:33.501534 kubelet[3530]: E1013 05:00:33.501517 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:00:35.358431 kubelet[3530]: E1013 05:00:35.358127 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:37.358283 kubelet[3530]: E1013 05:00:37.358232 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:37.975760 containerd[1982]: time="2025-10-13T05:00:37.975711350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:37.978163 containerd[1982]: time="2025-10-13T05:00:37.978135562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Oct 13 05:00:37.981507 containerd[1982]: time="2025-10-13T05:00:37.981212792Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:37.986382 containerd[1982]: time="2025-10-13T05:00:37.986351776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:37.987098 containerd[1982]: time="2025-10-13T05:00:37.987068876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id 
\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 6.100513366s" Oct 13 05:00:37.987098 containerd[1982]: time="2025-10-13T05:00:37.987098077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 05:00:37.994857 containerd[1982]: time="2025-10-13T05:00:37.994825845Z" level=info msg="CreateContainer within sandbox \"09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:00:38.021878 containerd[1982]: time="2025-10-13T05:00:38.021835441Z" level=info msg="Container 2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:38.039229 containerd[1982]: time="2025-10-13T05:00:38.039182751Z" level=info msg="CreateContainer within sandbox \"09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986\"" Oct 13 05:00:38.039735 containerd[1982]: time="2025-10-13T05:00:38.039712950Z" level=info msg="StartContainer for \"2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986\"" Oct 13 05:00:38.041757 containerd[1982]: time="2025-10-13T05:00:38.041730303Z" level=info msg="connecting to shim 2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986" address="unix:///run/containerd/s/48479be5b77eb14c2bf1d5cfa984fdcf9d68a051b911e987703dfadebe2ce212" protocol=ttrpc version=3 Oct 13 05:00:38.058613 systemd[1]: Started 
cri-containerd-2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986.scope - libcontainer container 2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986. Oct 13 05:00:38.089809 containerd[1982]: time="2025-10-13T05:00:38.089705630Z" level=info msg="StartContainer for \"2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986\" returns successfully" Oct 13 05:00:38.097627 systemd[1]: cri-containerd-2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986.scope: Deactivated successfully. Oct 13 05:00:38.097989 containerd[1982]: time="2025-10-13T05:00:38.097862075Z" level=info msg="received exit event container_id:\"2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986\" id:\"2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986\" pid:4190 exited_at:{seconds:1760331638 nanos:97385365}" Oct 13 05:00:38.099647 containerd[1982]: time="2025-10-13T05:00:38.099564914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986\" id:\"2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986\" pid:4190 exited_at:{seconds:1760331638 nanos:97385365}" Oct 13 05:00:38.115236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cfb2f04c599ee30252fc95aea902edb7e0c4d479c25a2b8004dceee2af86986-rootfs.mount: Deactivated successfully. 
Oct 13 05:00:38.460365 kubelet[3530]: I1013 05:00:38.460289 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cdf5d7b88-gnrsr" podStartSLOduration=7.677396515 podStartE2EDuration="10.460275985s" podCreationTimestamp="2025-10-13 05:00:28 +0000 UTC" firstStartedPulling="2025-10-13 05:00:29.102907786 +0000 UTC m=+18.828942429" lastFinishedPulling="2025-10-13 05:00:31.885787256 +0000 UTC m=+21.611821899" observedRunningTime="2025-10-13 05:00:32.444108487 +0000 UTC m=+22.170143130" watchObservedRunningTime="2025-10-13 05:00:38.460275985 +0000 UTC m=+28.186310628" Oct 13 05:00:39.358265 kubelet[3530]: E1013 05:00:39.358208 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:39.448553 containerd[1982]: time="2025-10-13T05:00:39.448464324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:00:41.358356 kubelet[3530]: E1013 05:00:41.358298 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:43.358102 kubelet[3530]: E1013 05:00:43.358046 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:44.460124 containerd[1982]: time="2025-10-13T05:00:44.459641219Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:44.464426 containerd[1982]: time="2025-10-13T05:00:44.464398919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 05:00:44.467793 containerd[1982]: time="2025-10-13T05:00:44.467763645Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:44.471567 containerd[1982]: time="2025-10-13T05:00:44.471528917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:44.472444 containerd[1982]: time="2025-10-13T05:00:44.472362372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 5.023758765s" Oct 13 05:00:44.472444 containerd[1982]: time="2025-10-13T05:00:44.472388653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 05:00:44.478943 containerd[1982]: time="2025-10-13T05:00:44.478914194Z" level=info msg="CreateContainer within sandbox \"09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:00:44.498493 containerd[1982]: time="2025-10-13T05:00:44.497700675Z" level=info msg="Container 79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092: CDI devices from CRI 
Config.CDIDevices: []" Oct 13 05:00:44.515653 containerd[1982]: time="2025-10-13T05:00:44.515566930Z" level=info msg="CreateContainer within sandbox \"09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092\"" Oct 13 05:00:44.516272 containerd[1982]: time="2025-10-13T05:00:44.516204628Z" level=info msg="StartContainer for \"79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092\"" Oct 13 05:00:44.517550 containerd[1982]: time="2025-10-13T05:00:44.517524640Z" level=info msg="connecting to shim 79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092" address="unix:///run/containerd/s/48479be5b77eb14c2bf1d5cfa984fdcf9d68a051b911e987703dfadebe2ce212" protocol=ttrpc version=3 Oct 13 05:00:44.534613 systemd[1]: Started cri-containerd-79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092.scope - libcontainer container 79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092. 
Oct 13 05:00:44.567566 containerd[1982]: time="2025-10-13T05:00:44.567454328Z" level=info msg="StartContainer for \"79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092\" returns successfully" Oct 13 05:00:45.357643 kubelet[3530]: E1013 05:00:45.357588 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:45.676776 containerd[1982]: time="2025-10-13T05:00:45.676505040Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:00:45.679070 systemd[1]: cri-containerd-79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092.scope: Deactivated successfully. Oct 13 05:00:45.679321 systemd[1]: cri-containerd-79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092.scope: Consumed 311ms CPU time, 185.3M memory peak, 165.8M written to disk. 
Oct 13 05:00:45.680495 containerd[1982]: time="2025-10-13T05:00:45.680444237Z" level=info msg="received exit event container_id:\"79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092\" id:\"79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092\" pid:4250 exited_at:{seconds:1760331645 nanos:680004721}" Oct 13 05:00:45.680608 containerd[1982]: time="2025-10-13T05:00:45.680444541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092\" id:\"79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092\" pid:4250 exited_at:{seconds:1760331645 nanos:680004721}" Oct 13 05:00:45.696633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79589159cfd745751f17b4394ac6393d81463cd76a1b1c4deb40d26c8d4aa092-rootfs.mount: Deactivated successfully. Oct 13 05:00:45.750103 kubelet[3530]: I1013 05:00:45.749510 3530 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:00:46.542228 systemd[1]: Created slice kubepods-besteffort-pod20bc3b45_f2f2_4e7c_b65b_29618e3db751.slice - libcontainer container kubepods-besteffort-pod20bc3b45_f2f2_4e7c_b65b_29618e3db751.slice. Oct 13 05:00:46.551555 systemd[1]: Created slice kubepods-besteffort-pod1ab6c81d_307d_4aad_af59_25a7f5638111.slice - libcontainer container kubepods-besteffort-pod1ab6c81d_307d_4aad_af59_25a7f5638111.slice. Oct 13 05:00:46.565879 systemd[1]: Created slice kubepods-besteffort-pod63beb110_82e9_4863_b11d_6264a263f694.slice - libcontainer container kubepods-besteffort-pod63beb110_82e9_4863_b11d_6264a263f694.slice. 
Oct 13 05:00:46.573893 kubelet[3530]: I1013 05:00:46.573729 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75340111-1f4a-4034-883e-b1d34f800692-config-volume\") pod \"coredns-674b8bbfcf-p9228\" (UID: \"75340111-1f4a-4034-883e-b1d34f800692\") " pod="kube-system/coredns-674b8bbfcf-p9228" Oct 13 05:00:46.574989 kubelet[3530]: I1013 05:00:46.574460 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-backend-key-pair\") pod \"whisker-56555f9569-zlmrn\" (UID: \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\") " pod="calico-system/whisker-56555f9569-zlmrn" Oct 13 05:00:46.574989 kubelet[3530]: I1013 05:00:46.574958 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-ca-bundle\") pod \"whisker-56555f9569-zlmrn\" (UID: \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\") " pod="calico-system/whisker-56555f9569-zlmrn" Oct 13 05:00:46.575560 systemd[1]: Created slice kubepods-burstable-pod2557da88_1ce8_4cce_bf44_406de3bbb345.slice - libcontainer container kubepods-burstable-pod2557da88_1ce8_4cce_bf44_406de3bbb345.slice. 
Oct 13 05:00:46.575770 kubelet[3530]: I1013 05:00:46.574978 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvkz\" (UniqueName: \"kubernetes.io/projected/20bc3b45-f2f2-4e7c-b65b-29618e3db751-kube-api-access-5wvkz\") pod \"whisker-56555f9569-zlmrn\" (UID: \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\") " pod="calico-system/whisker-56555f9569-zlmrn" Oct 13 05:00:46.575949 kubelet[3530]: I1013 05:00:46.575898 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj5mn\" (UniqueName: \"kubernetes.io/projected/1ab6c81d-307d-4aad-af59-25a7f5638111-kube-api-access-jj5mn\") pod \"calico-kube-controllers-6586bcd76c-whz9f\" (UID: \"1ab6c81d-307d-4aad-af59-25a7f5638111\") " pod="calico-system/calico-kube-controllers-6586bcd76c-whz9f" Oct 13 05:00:46.575949 kubelet[3530]: I1013 05:00:46.575922 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9cgm\" (UniqueName: \"kubernetes.io/projected/75340111-1f4a-4034-883e-b1d34f800692-kube-api-access-f9cgm\") pod \"coredns-674b8bbfcf-p9228\" (UID: \"75340111-1f4a-4034-883e-b1d34f800692\") " pod="kube-system/coredns-674b8bbfcf-p9228" Oct 13 05:00:46.576214 kubelet[3530]: I1013 05:00:46.576199 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ab6c81d-307d-4aad-af59-25a7f5638111-tigera-ca-bundle\") pod \"calico-kube-controllers-6586bcd76c-whz9f\" (UID: \"1ab6c81d-307d-4aad-af59-25a7f5638111\") " pod="calico-system/calico-kube-controllers-6586bcd76c-whz9f" Oct 13 05:00:46.582977 systemd[1]: Created slice kubepods-besteffort-pod3b6f603c_580d_4f78_9055_03c49a4c91c0.slice - libcontainer container kubepods-besteffort-pod3b6f603c_580d_4f78_9055_03c49a4c91c0.slice. 
Oct 13 05:00:46.602499 systemd[1]: Created slice kubepods-besteffort-pod62eb588d_e844_4b88_bdae_4ca98aff9ba0.slice - libcontainer container kubepods-besteffort-pod62eb588d_e844_4b88_bdae_4ca98aff9ba0.slice. Oct 13 05:00:46.607713 systemd[1]: Created slice kubepods-burstable-pod75340111_1f4a_4034_883e_b1d34f800692.slice - libcontainer container kubepods-burstable-pod75340111_1f4a_4034_883e_b1d34f800692.slice. Oct 13 05:00:46.678248 kubelet[3530]: I1013 05:00:46.678203 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcjx\" (UniqueName: \"kubernetes.io/projected/2557da88-1ce8-4cce-bf44-406de3bbb345-kube-api-access-dpcjx\") pod \"coredns-674b8bbfcf-2pm2x\" (UID: \"2557da88-1ce8-4cce-bf44-406de3bbb345\") " pod="kube-system/coredns-674b8bbfcf-2pm2x" Oct 13 05:00:46.681566 kubelet[3530]: I1013 05:00:46.679737 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2kn\" (UniqueName: \"kubernetes.io/projected/62eb588d-e844-4b88-bdae-4ca98aff9ba0-kube-api-access-tg2kn\") pod \"goldmane-54d579b49d-vf59k\" (UID: \"62eb588d-e844-4b88-bdae-4ca98aff9ba0\") " pod="calico-system/goldmane-54d579b49d-vf59k" Oct 13 05:00:46.681566 kubelet[3530]: I1013 05:00:46.679766 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63beb110-82e9-4863-b11d-6264a263f694-calico-apiserver-certs\") pod \"calico-apiserver-5567d8994c-289xc\" (UID: \"63beb110-82e9-4863-b11d-6264a263f694\") " pod="calico-apiserver/calico-apiserver-5567d8994c-289xc" Oct 13 05:00:46.681566 kubelet[3530]: I1013 05:00:46.679778 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhh6x\" (UniqueName: \"kubernetes.io/projected/63beb110-82e9-4863-b11d-6264a263f694-kube-api-access-mhh6x\") pod 
\"calico-apiserver-5567d8994c-289xc\" (UID: \"63beb110-82e9-4863-b11d-6264a263f694\") " pod="calico-apiserver/calico-apiserver-5567d8994c-289xc" Oct 13 05:00:46.681566 kubelet[3530]: I1013 05:00:46.679789 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2557da88-1ce8-4cce-bf44-406de3bbb345-config-volume\") pod \"coredns-674b8bbfcf-2pm2x\" (UID: \"2557da88-1ce8-4cce-bf44-406de3bbb345\") " pod="kube-system/coredns-674b8bbfcf-2pm2x" Oct 13 05:00:46.681566 kubelet[3530]: I1013 05:00:46.679800 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62eb588d-e844-4b88-bdae-4ca98aff9ba0-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-vf59k\" (UID: \"62eb588d-e844-4b88-bdae-4ca98aff9ba0\") " pod="calico-system/goldmane-54d579b49d-vf59k" Oct 13 05:00:46.681732 kubelet[3530]: I1013 05:00:46.679812 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlcl\" (UniqueName: \"kubernetes.io/projected/3b6f603c-580d-4f78-9055-03c49a4c91c0-kube-api-access-2nlcl\") pod \"calico-apiserver-5567d8994c-qldmt\" (UID: \"3b6f603c-580d-4f78-9055-03c49a4c91c0\") " pod="calico-apiserver/calico-apiserver-5567d8994c-qldmt" Oct 13 05:00:46.681732 kubelet[3530]: I1013 05:00:46.679823 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/62eb588d-e844-4b88-bdae-4ca98aff9ba0-goldmane-key-pair\") pod \"goldmane-54d579b49d-vf59k\" (UID: \"62eb588d-e844-4b88-bdae-4ca98aff9ba0\") " pod="calico-system/goldmane-54d579b49d-vf59k" Oct 13 05:00:46.681732 kubelet[3530]: I1013 05:00:46.679839 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/62eb588d-e844-4b88-bdae-4ca98aff9ba0-config\") pod \"goldmane-54d579b49d-vf59k\" (UID: \"62eb588d-e844-4b88-bdae-4ca98aff9ba0\") " pod="calico-system/goldmane-54d579b49d-vf59k" Oct 13 05:00:46.681732 kubelet[3530]: I1013 05:00:46.679853 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b6f603c-580d-4f78-9055-03c49a4c91c0-calico-apiserver-certs\") pod \"calico-apiserver-5567d8994c-qldmt\" (UID: \"3b6f603c-580d-4f78-9055-03c49a4c91c0\") " pod="calico-apiserver/calico-apiserver-5567d8994c-qldmt" Oct 13 05:00:46.847034 containerd[1982]: time="2025-10-13T05:00:46.846916701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56555f9569-zlmrn,Uid:20bc3b45-f2f2-4e7c-b65b-29618e3db751,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:46.862270 containerd[1982]: time="2025-10-13T05:00:46.862214917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6586bcd76c-whz9f,Uid:1ab6c81d-307d-4aad-af59-25a7f5638111,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:46.869623 containerd[1982]: time="2025-10-13T05:00:46.869587033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567d8994c-289xc,Uid:63beb110-82e9-4863-b11d-6264a263f694,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:00:46.881091 containerd[1982]: time="2025-10-13T05:00:46.881062719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pm2x,Uid:2557da88-1ce8-4cce-bf44-406de3bbb345,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:46.887520 containerd[1982]: time="2025-10-13T05:00:46.887457529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567d8994c-qldmt,Uid:3b6f603c-580d-4f78-9055-03c49a4c91c0,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:00:46.906465 containerd[1982]: time="2025-10-13T05:00:46.906295587Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-vf59k,Uid:62eb588d-e844-4b88-bdae-4ca98aff9ba0,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:46.916654 containerd[1982]: time="2025-10-13T05:00:46.916622713Z" level=error msg="Failed to destroy network for sandbox \"7e3146a1209cf98cd01d6e4a3144604af14f28bb46bc7fc2cedbc0090671615b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.920318 containerd[1982]: time="2025-10-13T05:00:46.920291911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p9228,Uid:75340111-1f4a-4034-883e-b1d34f800692,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:46.920697 containerd[1982]: time="2025-10-13T05:00:46.920568407Z" level=error msg="Failed to destroy network for sandbox \"59f407314b44bf8336f4c152970c837eaaf2a3133b0ddeefefa9bdf8fa72b584\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.920822 containerd[1982]: time="2025-10-13T05:00:46.920797517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56555f9569-zlmrn,Uid:20bc3b45-f2f2-4e7c-b65b-29618e3db751,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3146a1209cf98cd01d6e4a3144604af14f28bb46bc7fc2cedbc0090671615b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.921687 kubelet[3530]: E1013 05:00:46.921644 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3146a1209cf98cd01d6e4a3144604af14f28bb46bc7fc2cedbc0090671615b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.921775 kubelet[3530]: E1013 05:00:46.921713 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3146a1209cf98cd01d6e4a3144604af14f28bb46bc7fc2cedbc0090671615b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56555f9569-zlmrn" Oct 13 05:00:46.921775 kubelet[3530]: E1013 05:00:46.921729 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3146a1209cf98cd01d6e4a3144604af14f28bb46bc7fc2cedbc0090671615b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56555f9569-zlmrn" Oct 13 05:00:46.921851 kubelet[3530]: E1013 05:00:46.921768 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56555f9569-zlmrn_calico-system(20bc3b45-f2f2-4e7c-b65b-29618e3db751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56555f9569-zlmrn_calico-system(20bc3b45-f2f2-4e7c-b65b-29618e3db751)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e3146a1209cf98cd01d6e4a3144604af14f28bb46bc7fc2cedbc0090671615b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56555f9569-zlmrn" podUID="20bc3b45-f2f2-4e7c-b65b-29618e3db751" Oct 13 05:00:46.929296 
containerd[1982]: time="2025-10-13T05:00:46.929261864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6586bcd76c-whz9f,Uid:1ab6c81d-307d-4aad-af59-25a7f5638111,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59f407314b44bf8336f4c152970c837eaaf2a3133b0ddeefefa9bdf8fa72b584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.929833 kubelet[3530]: E1013 05:00:46.929579 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59f407314b44bf8336f4c152970c837eaaf2a3133b0ddeefefa9bdf8fa72b584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.929833 kubelet[3530]: E1013 05:00:46.929620 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59f407314b44bf8336f4c152970c837eaaf2a3133b0ddeefefa9bdf8fa72b584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6586bcd76c-whz9f" Oct 13 05:00:46.929833 kubelet[3530]: E1013 05:00:46.929634 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59f407314b44bf8336f4c152970c837eaaf2a3133b0ddeefefa9bdf8fa72b584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6586bcd76c-whz9f" Oct 13 05:00:46.929946 kubelet[3530]: E1013 05:00:46.929669 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6586bcd76c-whz9f_calico-system(1ab6c81d-307d-4aad-af59-25a7f5638111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6586bcd76c-whz9f_calico-system(1ab6c81d-307d-4aad-af59-25a7f5638111)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59f407314b44bf8336f4c152970c837eaaf2a3133b0ddeefefa9bdf8fa72b584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6586bcd76c-whz9f" podUID="1ab6c81d-307d-4aad-af59-25a7f5638111" Oct 13 05:00:46.970894 containerd[1982]: time="2025-10-13T05:00:46.970847200Z" level=error msg="Failed to destroy network for sandbox \"f04d3391c922a3d6f9ac500eda7059deec5a27f837e32abfb4985f310bb8002a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.978608 containerd[1982]: time="2025-10-13T05:00:46.978557774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567d8994c-289xc,Uid:63beb110-82e9-4863-b11d-6264a263f694,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04d3391c922a3d6f9ac500eda7059deec5a27f837e32abfb4985f310bb8002a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.980086 kubelet[3530]: E1013 05:00:46.979358 3530 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04d3391c922a3d6f9ac500eda7059deec5a27f837e32abfb4985f310bb8002a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:46.980086 kubelet[3530]: E1013 05:00:46.979413 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04d3391c922a3d6f9ac500eda7059deec5a27f837e32abfb4985f310bb8002a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567d8994c-289xc" Oct 13 05:00:46.980086 kubelet[3530]: E1013 05:00:46.979435 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04d3391c922a3d6f9ac500eda7059deec5a27f837e32abfb4985f310bb8002a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567d8994c-289xc" Oct 13 05:00:46.980201 kubelet[3530]: E1013 05:00:46.979498 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5567d8994c-289xc_calico-apiserver(63beb110-82e9-4863-b11d-6264a263f694)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5567d8994c-289xc_calico-apiserver(63beb110-82e9-4863-b11d-6264a263f694)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f04d3391c922a3d6f9ac500eda7059deec5a27f837e32abfb4985f310bb8002a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5567d8994c-289xc" podUID="63beb110-82e9-4863-b11d-6264a263f694" Oct 13 05:00:46.995933 containerd[1982]: time="2025-10-13T05:00:46.995895271Z" level=error msg="Failed to destroy network for sandbox \"024ffedbe258f88f5036a32de586d14adcb7fa1e9c3d09b6126a9eca71f886b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.002441 containerd[1982]: time="2025-10-13T05:00:47.002393027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pm2x,Uid:2557da88-1ce8-4cce-bf44-406de3bbb345,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"024ffedbe258f88f5036a32de586d14adcb7fa1e9c3d09b6126a9eca71f886b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.003443 kubelet[3530]: E1013 05:00:47.002869 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"024ffedbe258f88f5036a32de586d14adcb7fa1e9c3d09b6126a9eca71f886b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.003443 kubelet[3530]: E1013 05:00:47.002918 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"024ffedbe258f88f5036a32de586d14adcb7fa1e9c3d09b6126a9eca71f886b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2pm2x" Oct 13 05:00:47.003443 kubelet[3530]: E1013 05:00:47.002934 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"024ffedbe258f88f5036a32de586d14adcb7fa1e9c3d09b6126a9eca71f886b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2pm2x" Oct 13 05:00:47.004423 kubelet[3530]: E1013 05:00:47.002973 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2pm2x_kube-system(2557da88-1ce8-4cce-bf44-406de3bbb345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2pm2x_kube-system(2557da88-1ce8-4cce-bf44-406de3bbb345)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"024ffedbe258f88f5036a32de586d14adcb7fa1e9c3d09b6126a9eca71f886b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2pm2x" podUID="2557da88-1ce8-4cce-bf44-406de3bbb345" Oct 13 05:00:47.012842 containerd[1982]: time="2025-10-13T05:00:47.012802587Z" level=error msg="Failed to destroy network for sandbox \"bdd2de4ec8d33b381e3fc61f8e191d6fc599aeb5a9f606f3d20b910ab569b556\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.018596 containerd[1982]: time="2025-10-13T05:00:47.018440392Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5567d8994c-qldmt,Uid:3b6f603c-580d-4f78-9055-03c49a4c91c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd2de4ec8d33b381e3fc61f8e191d6fc599aeb5a9f606f3d20b910ab569b556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.019576 kubelet[3530]: E1013 05:00:47.018912 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd2de4ec8d33b381e3fc61f8e191d6fc599aeb5a9f606f3d20b910ab569b556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.019576 kubelet[3530]: E1013 05:00:47.018963 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd2de4ec8d33b381e3fc61f8e191d6fc599aeb5a9f606f3d20b910ab569b556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567d8994c-qldmt" Oct 13 05:00:47.019576 kubelet[3530]: E1013 05:00:47.018979 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd2de4ec8d33b381e3fc61f8e191d6fc599aeb5a9f606f3d20b910ab569b556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5567d8994c-qldmt" Oct 13 05:00:47.019697 kubelet[3530]: E1013 05:00:47.019031 3530 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5567d8994c-qldmt_calico-apiserver(3b6f603c-580d-4f78-9055-03c49a4c91c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5567d8994c-qldmt_calico-apiserver(3b6f603c-580d-4f78-9055-03c49a4c91c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdd2de4ec8d33b381e3fc61f8e191d6fc599aeb5a9f606f3d20b910ab569b556\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5567d8994c-qldmt" podUID="3b6f603c-580d-4f78-9055-03c49a4c91c0" Oct 13 05:00:47.022513 containerd[1982]: time="2025-10-13T05:00:47.022463607Z" level=error msg="Failed to destroy network for sandbox \"e78141ad6282d6c402b213d11ec132029117f408c3cab08cee3712e422db713e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.026803 containerd[1982]: time="2025-10-13T05:00:47.026762734Z" level=error msg="Failed to destroy network for sandbox \"ce677b18948e986ac0025c626fa923c79218a4ba9ef420e14f18ed2a1e61dc12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.026967 containerd[1982]: time="2025-10-13T05:00:47.026941411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p9228,Uid:75340111-1f4a-4034-883e-b1d34f800692,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78141ad6282d6c402b213d11ec132029117f408c3cab08cee3712e422db713e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.027248 kubelet[3530]: E1013 05:00:47.027212 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78141ad6282d6c402b213d11ec132029117f408c3cab08cee3712e422db713e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.027291 kubelet[3530]: E1013 05:00:47.027262 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78141ad6282d6c402b213d11ec132029117f408c3cab08cee3712e422db713e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p9228" Oct 13 05:00:47.027311 kubelet[3530]: E1013 05:00:47.027276 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78141ad6282d6c402b213d11ec132029117f408c3cab08cee3712e422db713e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p9228" Oct 13 05:00:47.027354 kubelet[3530]: E1013 05:00:47.027330 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p9228_kube-system(75340111-1f4a-4034-883e-b1d34f800692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p9228_kube-system(75340111-1f4a-4034-883e-b1d34f800692)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"e78141ad6282d6c402b213d11ec132029117f408c3cab08cee3712e422db713e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p9228" podUID="75340111-1f4a-4034-883e-b1d34f800692" Oct 13 05:00:47.031185 containerd[1982]: time="2025-10-13T05:00:47.031147280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vf59k,Uid:62eb588d-e844-4b88-bdae-4ca98aff9ba0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce677b18948e986ac0025c626fa923c79218a4ba9ef420e14f18ed2a1e61dc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.031383 kubelet[3530]: E1013 05:00:47.031359 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce677b18948e986ac0025c626fa923c79218a4ba9ef420e14f18ed2a1e61dc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.031554 kubelet[3530]: E1013 05:00:47.031460 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce677b18948e986ac0025c626fa923c79218a4ba9ef420e14f18ed2a1e61dc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-vf59k" Oct 13 05:00:47.031809 kubelet[3530]: E1013 05:00:47.031787 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce677b18948e986ac0025c626fa923c79218a4ba9ef420e14f18ed2a1e61dc12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-vf59k" Oct 13 05:00:47.031928 kubelet[3530]: E1013 05:00:47.031910 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-vf59k_calico-system(62eb588d-e844-4b88-bdae-4ca98aff9ba0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-vf59k_calico-system(62eb588d-e844-4b88-bdae-4ca98aff9ba0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce677b18948e986ac0025c626fa923c79218a4ba9ef420e14f18ed2a1e61dc12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-vf59k" podUID="62eb588d-e844-4b88-bdae-4ca98aff9ba0" Oct 13 05:00:47.362565 systemd[1]: Created slice kubepods-besteffort-podcc60dacc_3ca6_4b24_9872_16cd8a93e18d.slice - libcontainer container kubepods-besteffort-podcc60dacc_3ca6_4b24_9872_16cd8a93e18d.slice. 
Oct 13 05:00:47.365114 containerd[1982]: time="2025-10-13T05:00:47.365072977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fn9d,Uid:cc60dacc-3ca6-4b24-9872-16cd8a93e18d,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:47.401704 containerd[1982]: time="2025-10-13T05:00:47.401599741Z" level=error msg="Failed to destroy network for sandbox \"1f218e0dc0432fff965bf4fcc76124d0e3919d76c4787d5cb90625ae749ec41a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.407118 containerd[1982]: time="2025-10-13T05:00:47.406844239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fn9d,Uid:cc60dacc-3ca6-4b24-9872-16cd8a93e18d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f218e0dc0432fff965bf4fcc76124d0e3919d76c4787d5cb90625ae749ec41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.407516 kubelet[3530]: E1013 05:00:47.407469 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f218e0dc0432fff965bf4fcc76124d0e3919d76c4787d5cb90625ae749ec41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:00:47.407673 kubelet[3530]: E1013 05:00:47.407535 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f218e0dc0432fff965bf4fcc76124d0e3919d76c4787d5cb90625ae749ec41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2fn9d" Oct 13 05:00:47.407673 kubelet[3530]: E1013 05:00:47.407551 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f218e0dc0432fff965bf4fcc76124d0e3919d76c4787d5cb90625ae749ec41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2fn9d" Oct 13 05:00:47.407673 kubelet[3530]: E1013 05:00:47.407605 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2fn9d_calico-system(cc60dacc-3ca6-4b24-9872-16cd8a93e18d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2fn9d_calico-system(cc60dacc-3ca6-4b24-9872-16cd8a93e18d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f218e0dc0432fff965bf4fcc76124d0e3919d76c4787d5cb90625ae749ec41a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2fn9d" podUID="cc60dacc-3ca6-4b24-9872-16cd8a93e18d" Oct 13 05:00:47.468210 containerd[1982]: time="2025-10-13T05:00:47.468163659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:00:54.559310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869683714.mount: Deactivated successfully. 
Oct 13 05:00:54.911409 containerd[1982]: time="2025-10-13T05:00:54.911281273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:54.913975 containerd[1982]: time="2025-10-13T05:00:54.913936211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 05:00:54.917554 containerd[1982]: time="2025-10-13T05:00:54.917508823Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:54.926763 containerd[1982]: time="2025-10-13T05:00:54.926715952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:54.927266 containerd[1982]: time="2025-10-13T05:00:54.926998751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 7.458577438s" Oct 13 05:00:54.927266 containerd[1982]: time="2025-10-13T05:00:54.927038593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 05:00:54.949034 containerd[1982]: time="2025-10-13T05:00:54.948993038Z" level=info msg="CreateContainer within sandbox \"09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:00:54.974206 containerd[1982]: time="2025-10-13T05:00:54.973243587Z" level=info msg="Container 
f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:54.995182 containerd[1982]: time="2025-10-13T05:00:54.995120614Z" level=info msg="CreateContainer within sandbox \"09efbb7023df065b8e6515c66ba36afb0da288060ac49e4a4c0fa55900b32a80\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\"" Oct 13 05:00:54.995964 containerd[1982]: time="2025-10-13T05:00:54.995794520Z" level=info msg="StartContainer for \"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\"" Oct 13 05:00:54.997653 containerd[1982]: time="2025-10-13T05:00:54.997604299Z" level=info msg="connecting to shim f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626" address="unix:///run/containerd/s/48479be5b77eb14c2bf1d5cfa984fdcf9d68a051b911e987703dfadebe2ce212" protocol=ttrpc version=3 Oct 13 05:00:55.015618 systemd[1]: Started cri-containerd-f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626.scope - libcontainer container f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626. Oct 13 05:00:55.058866 containerd[1982]: time="2025-10-13T05:00:55.058822552Z" level=info msg="StartContainer for \"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\" returns successfully" Oct 13 05:00:55.507592 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:00:55.508446 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:00:55.509104 kubelet[3530]: I1013 05:00:55.508856 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5gpdg" podStartSLOduration=1.848793618 podStartE2EDuration="27.508841318s" podCreationTimestamp="2025-10-13 05:00:28 +0000 UTC" firstStartedPulling="2025-10-13 05:00:29.267736865 +0000 UTC m=+18.993771508" lastFinishedPulling="2025-10-13 05:00:54.927784565 +0000 UTC m=+44.653819208" observedRunningTime="2025-10-13 05:00:55.508028327 +0000 UTC m=+45.234062978" watchObservedRunningTime="2025-10-13 05:00:55.508841318 +0000 UTC m=+45.234875961" Oct 13 05:00:55.574279 containerd[1982]: time="2025-10-13T05:00:55.574244279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\" id:\"ab336698cbd4c585b7e56b80b8af1b35b43229a28f926fee0816a0c01f6f0e4e\" pid:4554 exit_status:1 exited_at:{seconds:1760331655 nanos:573888037}" Oct 13 05:00:55.733879 kubelet[3530]: I1013 05:00:55.733836 3530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-ca-bundle\") pod \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\" (UID: \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\") " Oct 13 05:00:55.734036 kubelet[3530]: I1013 05:00:55.733899 3530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-backend-key-pair\") pod \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\" (UID: \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\") " Oct 13 05:00:55.734036 kubelet[3530]: I1013 05:00:55.733916 3530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wvkz\" (UniqueName: \"kubernetes.io/projected/20bc3b45-f2f2-4e7c-b65b-29618e3db751-kube-api-access-5wvkz\") pod 
\"20bc3b45-f2f2-4e7c-b65b-29618e3db751\" (UID: \"20bc3b45-f2f2-4e7c-b65b-29618e3db751\") " Oct 13 05:00:55.734830 kubelet[3530]: I1013 05:00:55.734794 3530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "20bc3b45-f2f2-4e7c-b65b-29618e3db751" (UID: "20bc3b45-f2f2-4e7c-b65b-29618e3db751"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:00:55.739178 systemd[1]: var-lib-kubelet-pods-20bc3b45\x2df2f2\x2d4e7c\x2db65b\x2d29618e3db751-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:00:55.743130 systemd[1]: var-lib-kubelet-pods-20bc3b45\x2df2f2\x2d4e7c\x2db65b\x2d29618e3db751-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5wvkz.mount: Deactivated successfully. Oct 13 05:00:55.743227 kubelet[3530]: I1013 05:00:55.743148 3530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "20bc3b45-f2f2-4e7c-b65b-29618e3db751" (UID: "20bc3b45-f2f2-4e7c-b65b-29618e3db751"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:00:55.743972 kubelet[3530]: I1013 05:00:55.743644 3530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bc3b45-f2f2-4e7c-b65b-29618e3db751-kube-api-access-5wvkz" (OuterVolumeSpecName: "kube-api-access-5wvkz") pod "20bc3b45-f2f2-4e7c-b65b-29618e3db751" (UID: "20bc3b45-f2f2-4e7c-b65b-29618e3db751"). InnerVolumeSpecName "kube-api-access-5wvkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:00:55.835420 kubelet[3530]: I1013 05:00:55.835064 3530 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-backend-key-pair\") on node \"ci-4487.0.0-a-bf8a300537\" DevicePath \"\"" Oct 13 05:00:55.835420 kubelet[3530]: I1013 05:00:55.835107 3530 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5wvkz\" (UniqueName: \"kubernetes.io/projected/20bc3b45-f2f2-4e7c-b65b-29618e3db751-kube-api-access-5wvkz\") on node \"ci-4487.0.0-a-bf8a300537\" DevicePath \"\"" Oct 13 05:00:55.835420 kubelet[3530]: I1013 05:00:55.835114 3530 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20bc3b45-f2f2-4e7c-b65b-29618e3db751-whisker-ca-bundle\") on node \"ci-4487.0.0-a-bf8a300537\" DevicePath \"\"" Oct 13 05:00:56.364722 systemd[1]: Removed slice kubepods-besteffort-pod20bc3b45_f2f2_4e7c_b65b_29618e3db751.slice - libcontainer container kubepods-besteffort-pod20bc3b45_f2f2_4e7c_b65b_29618e3db751.slice. Oct 13 05:00:56.564951 containerd[1982]: time="2025-10-13T05:00:56.564845349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\" id:\"f6424f315517c443d1ac731645b12ffff26a090168364730952522500bdfa123\" pid:4598 exit_status:1 exited_at:{seconds:1760331656 nanos:564563661}" Oct 13 05:00:56.577703 systemd[1]: Created slice kubepods-besteffort-pod2f921969_7c49_4a23_8f78_9d7f5510afaa.slice - libcontainer container kubepods-besteffort-pod2f921969_7c49_4a23_8f78_9d7f5510afaa.slice. 
Oct 13 05:00:56.640687 kubelet[3530]: I1013 05:00:56.640295 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f921969-7c49-4a23-8f78-9d7f5510afaa-whisker-backend-key-pair\") pod \"whisker-6cf45b76dd-ndrgh\" (UID: \"2f921969-7c49-4a23-8f78-9d7f5510afaa\") " pod="calico-system/whisker-6cf45b76dd-ndrgh" Oct 13 05:00:56.641142 kubelet[3530]: I1013 05:00:56.641088 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f921969-7c49-4a23-8f78-9d7f5510afaa-whisker-ca-bundle\") pod \"whisker-6cf45b76dd-ndrgh\" (UID: \"2f921969-7c49-4a23-8f78-9d7f5510afaa\") " pod="calico-system/whisker-6cf45b76dd-ndrgh" Oct 13 05:00:56.641142 kubelet[3530]: I1013 05:00:56.641114 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t44b\" (UniqueName: \"kubernetes.io/projected/2f921969-7c49-4a23-8f78-9d7f5510afaa-kube-api-access-8t44b\") pod \"whisker-6cf45b76dd-ndrgh\" (UID: \"2f921969-7c49-4a23-8f78-9d7f5510afaa\") " pod="calico-system/whisker-6cf45b76dd-ndrgh" Oct 13 05:00:56.883002 containerd[1982]: time="2025-10-13T05:00:56.882960467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf45b76dd-ndrgh,Uid:2f921969-7c49-4a23-8f78-9d7f5510afaa,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:57.088507 systemd-networkd[1564]: cali536c99b0514: Link UP Oct 13 05:00:57.089701 systemd-networkd[1564]: cali536c99b0514: Gained carrier Oct 13 05:00:57.109529 containerd[1982]: 2025-10-13 05:00:56.919 [INFO][4633] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:00:57.109529 containerd[1982]: 2025-10-13 05:00:56.970 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0 whisker-6cf45b76dd- calico-system 2f921969-7c49-4a23-8f78-9d7f5510afaa 895 0 2025-10-13 05:00:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cf45b76dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 whisker-6cf45b76dd-ndrgh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali536c99b0514 [] [] }} ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-" Oct 13 05:00:57.109529 containerd[1982]: 2025-10-13 05:00:56.970 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" Oct 13 05:00:57.109529 containerd[1982]: 2025-10-13 05:00:57.006 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" HandleID="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Workload="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.006 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" HandleID="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Workload="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af60), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4487.0.0-a-bf8a300537", "pod":"whisker-6cf45b76dd-ndrgh", "timestamp":"2025-10-13 05:00:57.006174127 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.006 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.006 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.006 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.018 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.022 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.028 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.029 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110003 containerd[1982]: 2025-10-13 05:00:57.031 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110212 containerd[1982]: 2025-10-13 05:00:57.031 [INFO][4704] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 
handle="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110212 containerd[1982]: 2025-10-13 05:00:57.033 [INFO][4704] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9 Oct 13 05:00:57.110212 containerd[1982]: 2025-10-13 05:00:57.040 [INFO][4704] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110212 containerd[1982]: 2025-10-13 05:00:57.048 [INFO][4704] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.129/26] block=192.168.120.128/26 handle="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110212 containerd[1982]: 2025-10-13 05:00:57.048 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.129/26] handle="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.110212 containerd[1982]: 2025-10-13 05:00:57.048 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:00:57.110212 containerd[1982]: 2025-10-13 05:00:57.048 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.129/26] IPv6=[] ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" HandleID="k8s-pod-network.f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Workload="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" Oct 13 05:00:57.110322 containerd[1982]: 2025-10-13 05:00:57.052 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0", GenerateName:"whisker-6cf45b76dd-", Namespace:"calico-system", SelfLink:"", UID:"2f921969-7c49-4a23-8f78-9d7f5510afaa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf45b76dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"whisker-6cf45b76dd-ndrgh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali536c99b0514", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:57.110322 containerd[1982]: 2025-10-13 05:00:57.052 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.129/32] ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" Oct 13 05:00:57.110380 containerd[1982]: 2025-10-13 05:00:57.052 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali536c99b0514 ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" Oct 13 05:00:57.110380 containerd[1982]: 2025-10-13 05:00:57.089 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" Oct 13 05:00:57.110409 containerd[1982]: 2025-10-13 05:00:57.093 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0", GenerateName:"whisker-6cf45b76dd-", Namespace:"calico-system", SelfLink:"", 
UID:"2f921969-7c49-4a23-8f78-9d7f5510afaa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf45b76dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9", Pod:"whisker-6cf45b76dd-ndrgh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali536c99b0514", MAC:"92:60:08:bd:7c:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:57.110443 containerd[1982]: 2025-10-13 05:00:57.106 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" Namespace="calico-system" Pod="whisker-6cf45b76dd-ndrgh" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-whisker--6cf45b76dd--ndrgh-eth0" Oct 13 05:00:57.169629 containerd[1982]: time="2025-10-13T05:00:57.169586446Z" level=info msg="connecting to shim f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9" address="unix:///run/containerd/s/f5256cc884c29770796b7a0baeac41117608bb4a94558c6c73a27fbb14723922" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:57.199777 systemd[1]: Started 
cri-containerd-f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9.scope - libcontainer container f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9. Oct 13 05:00:57.229079 containerd[1982]: time="2025-10-13T05:00:57.229041023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf45b76dd-ndrgh,Uid:2f921969-7c49-4a23-8f78-9d7f5510afaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9\"" Oct 13 05:00:57.237010 containerd[1982]: time="2025-10-13T05:00:57.236980035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:00:57.359130 containerd[1982]: time="2025-10-13T05:00:57.358996262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6586bcd76c-whz9f,Uid:1ab6c81d-307d-4aad-af59-25a7f5638111,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:57.444292 systemd-networkd[1564]: cali2e818da77c9: Link UP Oct 13 05:00:57.444458 systemd-networkd[1564]: cali2e818da77c9: Gained carrier Oct 13 05:00:57.460577 containerd[1982]: 2025-10-13 05:00:57.382 [INFO][4773] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:00:57.460577 containerd[1982]: 2025-10-13 05:00:57.390 [INFO][4773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0 calico-kube-controllers-6586bcd76c- calico-system 1ab6c81d-307d-4aad-af59-25a7f5638111 821 0 2025-10-13 05:00:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6586bcd76c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 calico-kube-controllers-6586bcd76c-whz9f eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali2e818da77c9 [] [] }} ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-" Oct 13 05:00:57.460577 containerd[1982]: 2025-10-13 05:00:57.390 [INFO][4773] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" Oct 13 05:00:57.460577 containerd[1982]: 2025-10-13 05:00:57.406 [INFO][4785] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" HandleID="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.407 [INFO][4785] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" HandleID="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-bf8a300537", "pod":"calico-kube-controllers-6586bcd76c-whz9f", "timestamp":"2025-10-13 05:00:57.406865432 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.407 [INFO][4785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.407 [INFO][4785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.407 [INFO][4785] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.415 [INFO][4785] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.418 [INFO][4785] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.423 [INFO][4785] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.424 [INFO][4785] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460779 containerd[1982]: 2025-10-13 05:00:57.426 [INFO][4785] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460912 containerd[1982]: 2025-10-13 05:00:57.426 [INFO][4785] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460912 containerd[1982]: 2025-10-13 05:00:57.427 [INFO][4785] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312 Oct 13 05:00:57.460912 containerd[1982]: 2025-10-13 05:00:57.433 [INFO][4785] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460912 containerd[1982]: 2025-10-13 05:00:57.438 [INFO][4785] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.130/26] block=192.168.120.128/26 handle="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460912 containerd[1982]: 2025-10-13 05:00:57.438 [INFO][4785] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.130/26] handle="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:57.460912 containerd[1982]: 2025-10-13 05:00:57.438 [INFO][4785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:00:57.460912 containerd[1982]: 2025-10-13 05:00:57.438 [INFO][4785] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.130/26] IPv6=[] ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" HandleID="k8s-pod-network.5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" Oct 13 05:00:57.461002 containerd[1982]: 2025-10-13 05:00:57.440 [INFO][4773] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0", GenerateName:"calico-kube-controllers-6586bcd76c-", 
Namespace:"calico-system", SelfLink:"", UID:"1ab6c81d-307d-4aad-af59-25a7f5638111", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6586bcd76c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"calico-kube-controllers-6586bcd76c-whz9f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e818da77c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:57.461035 containerd[1982]: 2025-10-13 05:00:57.440 [INFO][4773] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.130/32] ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" Oct 13 05:00:57.461035 containerd[1982]: 2025-10-13 05:00:57.440 [INFO][4773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e818da77c9 ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" 
WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" Oct 13 05:00:57.461035 containerd[1982]: 2025-10-13 05:00:57.444 [INFO][4773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" Oct 13 05:00:57.461075 containerd[1982]: 2025-10-13 05:00:57.444 [INFO][4773] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0", GenerateName:"calico-kube-controllers-6586bcd76c-", Namespace:"calico-system", SelfLink:"", UID:"1ab6c81d-307d-4aad-af59-25a7f5638111", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6586bcd76c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", 
ContainerID:"5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312", Pod:"calico-kube-controllers-6586bcd76c-whz9f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e818da77c9", MAC:"7e:5f:b8:5a:74:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:57.461109 containerd[1982]: 2025-10-13 05:00:57.459 [INFO][4773] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" Namespace="calico-system" Pod="calico-kube-controllers-6586bcd76c-whz9f" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--kube--controllers--6586bcd76c--whz9f-eth0" Oct 13 05:00:57.507972 containerd[1982]: time="2025-10-13T05:00:57.507889153Z" level=info msg="connecting to shim 5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312" address="unix:///run/containerd/s/6bb9438cd072556cb3c2a0eab83b9bc6b008529c9ffb7659b9212c80cdaf9d9c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:57.538641 systemd[1]: Started cri-containerd-5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312.scope - libcontainer container 5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312. 
Oct 13 05:00:57.586671 containerd[1982]: time="2025-10-13T05:00:57.586610845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6586bcd76c-whz9f,Uid:1ab6c81d-307d-4aad-af59-25a7f5638111,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312\"" Oct 13 05:00:58.362272 kubelet[3530]: I1013 05:00:58.361955 3530 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bc3b45-f2f2-4e7c-b65b-29618e3db751" path="/var/lib/kubelet/pods/20bc3b45-f2f2-4e7c-b65b-29618e3db751/volumes" Oct 13 05:00:58.727114 containerd[1982]: time="2025-10-13T05:00:58.726520602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:58.728823 containerd[1982]: time="2025-10-13T05:00:58.728800711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Oct 13 05:00:58.731503 containerd[1982]: time="2025-10-13T05:00:58.731459302Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:58.737512 containerd[1982]: time="2025-10-13T05:00:58.737437255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:58.738444 containerd[1982]: time="2025-10-13T05:00:58.738242700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.501228288s" Oct 
13 05:00:58.738444 containerd[1982]: time="2025-10-13T05:00:58.738274597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Oct 13 05:00:58.740003 containerd[1982]: time="2025-10-13T05:00:58.739708483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:00:58.745925 containerd[1982]: time="2025-10-13T05:00:58.745898697Z" level=info msg="CreateContainer within sandbox \"f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:00:58.767494 containerd[1982]: time="2025-10-13T05:00:58.765966810Z" level=info msg="Container 1f74a9067a7c34f36a538c50120f03cf5b35ceb26f9eb0f23cd75da6000f32be: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:58.783292 containerd[1982]: time="2025-10-13T05:00:58.783237697Z" level=info msg="CreateContainer within sandbox \"f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1f74a9067a7c34f36a538c50120f03cf5b35ceb26f9eb0f23cd75da6000f32be\"" Oct 13 05:00:58.784101 containerd[1982]: time="2025-10-13T05:00:58.783974765Z" level=info msg="StartContainer for \"1f74a9067a7c34f36a538c50120f03cf5b35ceb26f9eb0f23cd75da6000f32be\"" Oct 13 05:00:58.784954 containerd[1982]: time="2025-10-13T05:00:58.784912798Z" level=info msg="connecting to shim 1f74a9067a7c34f36a538c50120f03cf5b35ceb26f9eb0f23cd75da6000f32be" address="unix:///run/containerd/s/f5256cc884c29770796b7a0baeac41117608bb4a94558c6c73a27fbb14723922" protocol=ttrpc version=3 Oct 13 05:00:58.801611 systemd[1]: Started cri-containerd-1f74a9067a7c34f36a538c50120f03cf5b35ceb26f9eb0f23cd75da6000f32be.scope - libcontainer container 1f74a9067a7c34f36a538c50120f03cf5b35ceb26f9eb0f23cd75da6000f32be. 
Oct 13 05:00:58.825957 kubelet[3530]: I1013 05:00:58.825919 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:00:58.839314 containerd[1982]: time="2025-10-13T05:00:58.839282462Z" level=info msg="StartContainer for \"1f74a9067a7c34f36a538c50120f03cf5b35ceb26f9eb0f23cd75da6000f32be\" returns successfully" Oct 13 05:00:58.926616 systemd-networkd[1564]: cali536c99b0514: Gained IPv6LL Oct 13 05:00:59.054714 systemd-networkd[1564]: cali2e818da77c9: Gained IPv6LL Oct 13 05:00:59.358633 containerd[1982]: time="2025-10-13T05:00:59.358517950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p9228,Uid:75340111-1f4a-4034-883e-b1d34f800692,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:59.519423 systemd-networkd[1564]: cali615d441453a: Link UP Oct 13 05:00:59.521251 systemd-networkd[1564]: cali615d441453a: Gained carrier Oct 13 05:00:59.544689 containerd[1982]: 2025-10-13 05:00:59.398 [INFO][4926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:00:59.544689 containerd[1982]: 2025-10-13 05:00:59.445 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0 coredns-674b8bbfcf- kube-system 75340111-1f4a-4034-883e-b1d34f800692 826 0 2025-10-13 05:00:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 coredns-674b8bbfcf-p9228 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali615d441453a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-" Oct 13 
05:00:59.544689 containerd[1982]: 2025-10-13 05:00:59.445 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" Oct 13 05:00:59.544689 containerd[1982]: 2025-10-13 05:00:59.472 [INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" HandleID="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Workload="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.472 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" HandleID="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Workload="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-a-bf8a300537", "pod":"coredns-674b8bbfcf-p9228", "timestamp":"2025-10-13 05:00:59.472766953 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.472 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.472 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.472 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.479 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.484 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.489 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.491 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.544929 containerd[1982]: 2025-10-13 05:00:59.494 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.546179 containerd[1982]: 2025-10-13 05:00:59.494 [INFO][4944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.546179 containerd[1982]: 2025-10-13 05:00:59.496 [INFO][4944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e Oct 13 05:00:59.546179 containerd[1982]: 2025-10-13 05:00:59.501 [INFO][4944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.546179 containerd[1982]: 2025-10-13 05:00:59.510 [INFO][4944] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.120.131/26] block=192.168.120.128/26 handle="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.546179 containerd[1982]: 2025-10-13 05:00:59.510 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.131/26] handle="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:00:59.546179 containerd[1982]: 2025-10-13 05:00:59.510 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:00:59.546179 containerd[1982]: 2025-10-13 05:00:59.510 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.131/26] IPv6=[] ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" HandleID="k8s-pod-network.1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Workload="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" Oct 13 05:00:59.546279 containerd[1982]: 2025-10-13 05:00:59.513 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"75340111-1f4a-4034-883e-b1d34f800692", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"coredns-674b8bbfcf-p9228", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali615d441453a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:59.546279 containerd[1982]: 2025-10-13 05:00:59.513 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.131/32] ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" Oct 13 05:00:59.546279 containerd[1982]: 2025-10-13 05:00:59.513 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali615d441453a ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" Oct 13 05:00:59.546279 containerd[1982]: 2025-10-13 05:00:59.525 [INFO][4926] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" Oct 13 05:00:59.546279 containerd[1982]: 2025-10-13 05:00:59.527 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"75340111-1f4a-4034-883e-b1d34f800692", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e", Pod:"coredns-674b8bbfcf-p9228", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali615d441453a", 
MAC:"e2:b1:e9:87:bb:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:59.546279 containerd[1982]: 2025-10-13 05:00:59.541 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" Namespace="kube-system" Pod="coredns-674b8bbfcf-p9228" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--p9228-eth0" Oct 13 05:00:59.810381 containerd[1982]: time="2025-10-13T05:00:59.810333057Z" level=info msg="connecting to shim 1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e" address="unix:///run/containerd/s/b07c1c12eeed9dd0c4ff0a644ee347a3a0dbf14f77c677cf2c87988560e1aa44" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:59.913719 systemd[1]: Started cri-containerd-1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e.scope - libcontainer container 1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e. 
Oct 13 05:01:00.042163 containerd[1982]: time="2025-10-13T05:01:00.042122216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p9228,Uid:75340111-1f4a-4034-883e-b1d34f800692,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e\"" Oct 13 05:01:00.058279 containerd[1982]: time="2025-10-13T05:01:00.058229911Z" level=info msg="CreateContainer within sandbox \"1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:01:00.083086 containerd[1982]: time="2025-10-13T05:01:00.083003623Z" level=info msg="Container 899f1a8bd28ae156020003e69b4baeab92dd9e2ad3c41850f425f364781cd306: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:00.091615 systemd-networkd[1564]: vxlan.calico: Link UP Oct 13 05:01:00.091621 systemd-networkd[1564]: vxlan.calico: Gained carrier Oct 13 05:01:00.102517 containerd[1982]: time="2025-10-13T05:01:00.102416855Z" level=info msg="CreateContainer within sandbox \"1a9f26e7b5143005af3d508c721164fdeedf6d05d322942dd855ed0e52ea094e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"899f1a8bd28ae156020003e69b4baeab92dd9e2ad3c41850f425f364781cd306\"" Oct 13 05:01:00.103236 containerd[1982]: time="2025-10-13T05:01:00.103103545Z" level=info msg="StartContainer for \"899f1a8bd28ae156020003e69b4baeab92dd9e2ad3c41850f425f364781cd306\"" Oct 13 05:01:00.105275 containerd[1982]: time="2025-10-13T05:01:00.105251282Z" level=info msg="connecting to shim 899f1a8bd28ae156020003e69b4baeab92dd9e2ad3c41850f425f364781cd306" address="unix:///run/containerd/s/b07c1c12eeed9dd0c4ff0a644ee347a3a0dbf14f77c677cf2c87988560e1aa44" protocol=ttrpc version=3 Oct 13 05:01:00.132625 systemd[1]: Started cri-containerd-899f1a8bd28ae156020003e69b4baeab92dd9e2ad3c41850f425f364781cd306.scope - libcontainer container 899f1a8bd28ae156020003e69b4baeab92dd9e2ad3c41850f425f364781cd306. 
Oct 13 05:01:00.189020 containerd[1982]: time="2025-10-13T05:01:00.188929427Z" level=info msg="StartContainer for \"899f1a8bd28ae156020003e69b4baeab92dd9e2ad3c41850f425f364781cd306\" returns successfully" Oct 13 05:01:00.359978 containerd[1982]: time="2025-10-13T05:01:00.359670823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vf59k,Uid:62eb588d-e844-4b88-bdae-4ca98aff9ba0,Namespace:calico-system,Attempt:0,}" Oct 13 05:01:00.360316 containerd[1982]: time="2025-10-13T05:01:00.360279752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567d8994c-qldmt,Uid:3b6f603c-580d-4f78-9055-03c49a4c91c0,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:01:00.507349 systemd-networkd[1564]: calieebd99d92bd: Link UP Oct 13 05:01:00.508620 systemd-networkd[1564]: calieebd99d92bd: Gained carrier Oct 13 05:01:00.529888 kubelet[3530]: I1013 05:01:00.529765 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p9228" podStartSLOduration=45.52975041 podStartE2EDuration="45.52975041s" podCreationTimestamp="2025-10-13 05:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:01:00.522563145 +0000 UTC m=+50.248597788" watchObservedRunningTime="2025-10-13 05:01:00.52975041 +0000 UTC m=+50.255785053" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.424 [INFO][5139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0 calico-apiserver-5567d8994c- calico-apiserver 3b6f603c-580d-4f78-9055-03c49a4c91c0 824 0 2025-10-13 05:00:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5567d8994c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 calico-apiserver-5567d8994c-qldmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieebd99d92bd [] [] }} ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.424 [INFO][5139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.452 [INFO][5170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" HandleID="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.452 [INFO][5170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" HandleID="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aadf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-a-bf8a300537", "pod":"calico-apiserver-5567d8994c-qldmt", "timestamp":"2025-10-13 05:01:00.452231894 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.452 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.452 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.452 [INFO][5170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.460 [INFO][5170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.464 [INFO][5170] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.468 [INFO][5170] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.470 [INFO][5170] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.472 [INFO][5170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.472 [INFO][5170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.473 [INFO][5170] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1 Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.481 [INFO][5170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.490 [INFO][5170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.132/26] block=192.168.120.128/26 handle="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.490 [INFO][5170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.132/26] handle="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.490 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:01:00.530985 containerd[1982]: 2025-10-13 05:01:00.490 [INFO][5170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.132/26] IPv6=[] ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" HandleID="k8s-pod-network.984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" Oct 13 05:01:00.532214 containerd[1982]: 2025-10-13 05:01:00.494 [INFO][5139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0", GenerateName:"calico-apiserver-5567d8994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b6f603c-580d-4f78-9055-03c49a4c91c0", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567d8994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"calico-apiserver-5567d8994c-qldmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieebd99d92bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:00.532214 containerd[1982]: 2025-10-13 05:01:00.495 [INFO][5139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.132/32] ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" Oct 13 05:01:00.532214 containerd[1982]: 2025-10-13 05:01:00.495 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieebd99d92bd ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" Oct 13 05:01:00.532214 containerd[1982]: 2025-10-13 05:01:00.509 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" Oct 13 05:01:00.532214 containerd[1982]: 2025-10-13 05:01:00.513 [INFO][5139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0", GenerateName:"calico-apiserver-5567d8994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b6f603c-580d-4f78-9055-03c49a4c91c0", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567d8994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1", Pod:"calico-apiserver-5567d8994c-qldmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieebd99d92bd", MAC:"d2:2d:7b:13:17:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:00.532214 containerd[1982]: 2025-10-13 05:01:00.526 [INFO][5139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-qldmt" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--qldmt-eth0" Oct 13 05:01:00.592376 containerd[1982]: time="2025-10-13T05:01:00.592329790Z" level=info 
msg="connecting to shim 984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1" address="unix:///run/containerd/s/d906f9987c26a45e6d827d829fd6c9bc95d4a3db0af74f33e20a4c1f156c8a61" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:01:00.619749 systemd[1]: Started cri-containerd-984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1.scope - libcontainer container 984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1. Oct 13 05:01:00.623777 systemd-networkd[1564]: cali9cf45ed56c4: Link UP Oct 13 05:01:00.624534 systemd-networkd[1564]: cali9cf45ed56c4: Gained carrier Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.406 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0 goldmane-54d579b49d- calico-system 62eb588d-e844-4b88-bdae-4ca98aff9ba0 825 0 2025-10-13 05:00:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 goldmane-54d579b49d-vf59k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9cf45ed56c4 [] [] }} ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.406 [INFO][5125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.455 [INFO][5160] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" HandleID="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Workload="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.455 [INFO][5160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" HandleID="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Workload="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-bf8a300537", "pod":"goldmane-54d579b49d-vf59k", "timestamp":"2025-10-13 05:01:00.455791917 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.455 [INFO][5160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.490 [INFO][5160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.490 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.560 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.578 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.591 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.593 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.595 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.595 [INFO][5160] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.597 [INFO][5160] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8 Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.602 [INFO][5160] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.616 [INFO][5160] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.120.133/26] block=192.168.120.128/26 handle="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.616 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.133/26] handle="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.616 [INFO][5160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:01:00.645208 containerd[1982]: 2025-10-13 05:01:00.616 [INFO][5160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.133/26] IPv6=[] ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" HandleID="k8s-pod-network.00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Workload="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" Oct 13 05:01:00.646346 containerd[1982]: 2025-10-13 05:01:00.618 [INFO][5125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"62eb588d-e844-4b88-bdae-4ca98aff9ba0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"goldmane-54d579b49d-vf59k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cf45ed56c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:00.646346 containerd[1982]: 2025-10-13 05:01:00.618 [INFO][5125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.133/32] ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" Oct 13 05:01:00.646346 containerd[1982]: 2025-10-13 05:01:00.618 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cf45ed56c4 ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" Oct 13 05:01:00.646346 containerd[1982]: 2025-10-13 05:01:00.624 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" Oct 13 05:01:00.646346 containerd[1982]: 2025-10-13 05:01:00.625 [INFO][5125] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"62eb588d-e844-4b88-bdae-4ca98aff9ba0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8", Pod:"goldmane-54d579b49d-vf59k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cf45ed56c4", MAC:"56:d8:fd:fc:b6:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:00.646346 containerd[1982]: 2025-10-13 05:01:00.639 [INFO][5125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" Namespace="calico-system" Pod="goldmane-54d579b49d-vf59k" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-goldmane--54d579b49d--vf59k-eth0" Oct 13 05:01:00.703456 containerd[1982]: time="2025-10-13T05:01:00.703415508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567d8994c-qldmt,Uid:3b6f603c-580d-4f78-9055-03c49a4c91c0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1\"" Oct 13 05:01:00.722521 containerd[1982]: time="2025-10-13T05:01:00.722471451Z" level=info msg="connecting to shim 00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8" address="unix:///run/containerd/s/e7bf34dbb00be87db0ff09d0892b6cf1ba0d6821ad433ee74d6ffce6c1745dbd" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:01:00.752838 systemd[1]: Started cri-containerd-00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8.scope - libcontainer container 00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8. 
Oct 13 05:01:00.802812 containerd[1982]: time="2025-10-13T05:01:00.802768649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-vf59k,Uid:62eb588d-e844-4b88-bdae-4ca98aff9ba0,Namespace:calico-system,Attempt:0,} returns sandbox id \"00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8\"" Oct 13 05:01:01.230721 systemd-networkd[1564]: vxlan.calico: Gained IPv6LL Oct 13 05:01:01.294732 systemd-networkd[1564]: cali615d441453a: Gained IPv6LL Oct 13 05:01:01.359044 containerd[1982]: time="2025-10-13T05:01:01.358766346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567d8994c-289xc,Uid:63beb110-82e9-4863-b11d-6264a263f694,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:01:01.708775 systemd-networkd[1564]: cali2876d85904e: Link UP Oct 13 05:01:01.709683 systemd-networkd[1564]: cali2876d85904e: Gained carrier Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.631 [INFO][5297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0 calico-apiserver-5567d8994c- calico-apiserver 63beb110-82e9-4863-b11d-6264a263f694 822 0 2025-10-13 05:00:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5567d8994c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 calico-apiserver-5567d8994c-289xc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2876d85904e [] [] }} ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 
05:01:01.631 [INFO][5297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.660 [INFO][5314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" HandleID="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.660 [INFO][5314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" HandleID="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.0-a-bf8a300537", "pod":"calico-apiserver-5567d8994c-289xc", "timestamp":"2025-10-13 05:01:01.660110912 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.660 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.660 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.660 [INFO][5314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.668 [INFO][5314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.673 [INFO][5314] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.677 [INFO][5314] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.680 [INFO][5314] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.683 [INFO][5314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.683 [INFO][5314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.685 [INFO][5314] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8 Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.690 [INFO][5314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.699 [INFO][5314] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.120.134/26] block=192.168.120.128/26 handle="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.700 [INFO][5314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.134/26] handle="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.700 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:01:01.730751 containerd[1982]: 2025-10-13 05:01:01.700 [INFO][5314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.134/26] IPv6=[] ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" HandleID="k8s-pod-network.00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Workload="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" Oct 13 05:01:01.731170 containerd[1982]: 2025-10-13 05:01:01.703 [INFO][5297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0", GenerateName:"calico-apiserver-5567d8994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63beb110-82e9-4863-b11d-6264a263f694", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567d8994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"calico-apiserver-5567d8994c-289xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2876d85904e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:01.731170 containerd[1982]: 2025-10-13 05:01:01.703 [INFO][5297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.134/32] ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" Oct 13 05:01:01.731170 containerd[1982]: 2025-10-13 05:01:01.703 [INFO][5297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2876d85904e ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" Oct 13 05:01:01.731170 containerd[1982]: 2025-10-13 05:01:01.708 [INFO][5297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" 
Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" Oct 13 05:01:01.731170 containerd[1982]: 2025-10-13 05:01:01.710 [INFO][5297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0", GenerateName:"calico-apiserver-5567d8994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"63beb110-82e9-4863-b11d-6264a263f694", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5567d8994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8", Pod:"calico-apiserver-5567d8994c-289xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali2876d85904e", MAC:"46:84:a2:fe:ea:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:01.731170 containerd[1982]: 2025-10-13 05:01:01.725 [INFO][5297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" Namespace="calico-apiserver" Pod="calico-apiserver-5567d8994c-289xc" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-calico--apiserver--5567d8994c--289xc-eth0" Oct 13 05:01:01.780669 containerd[1982]: time="2025-10-13T05:01:01.780625755Z" level=info msg="connecting to shim 00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8" address="unix:///run/containerd/s/1386a159a129a2080d3102e9323f71ee6e0416494ecafc19725c1daf68e9b11d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:01:01.805656 systemd[1]: Started cri-containerd-00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8.scope - libcontainer container 00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8. 
Oct 13 05:01:01.857747 containerd[1982]: time="2025-10-13T05:01:01.857706699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5567d8994c-289xc,Uid:63beb110-82e9-4863-b11d-6264a263f694,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8\"" Oct 13 05:01:02.192349 containerd[1982]: time="2025-10-13T05:01:02.192305211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:02.198002 containerd[1982]: time="2025-10-13T05:01:02.197970234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 05:01:02.202775 containerd[1982]: time="2025-10-13T05:01:02.202732746Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:02.208350 containerd[1982]: time="2025-10-13T05:01:02.208306367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:02.208881 containerd[1982]: time="2025-10-13T05:01:02.208697130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.468965965s" Oct 13 05:01:02.208881 containerd[1982]: time="2025-10-13T05:01:02.208726706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference 
\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 05:01:02.210214 containerd[1982]: time="2025-10-13T05:01:02.210185194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:01:02.224811 containerd[1982]: time="2025-10-13T05:01:02.224785561Z" level=info msg="CreateContainer within sandbox \"5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:01:02.247156 containerd[1982]: time="2025-10-13T05:01:02.247091062Z" level=info msg="Container cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:02.249160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420344010.mount: Deactivated successfully. Oct 13 05:01:02.254591 systemd-networkd[1564]: calieebd99d92bd: Gained IPv6LL Oct 13 05:01:02.272841 containerd[1982]: time="2025-10-13T05:01:02.272801390Z" level=info msg="CreateContainer within sandbox \"5cc5b114a057a3881ad98ac3b0e035905af6128fe2d6743bca357b8f40037312\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\"" Oct 13 05:01:02.273401 containerd[1982]: time="2025-10-13T05:01:02.273376862Z" level=info msg="StartContainer for \"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\"" Oct 13 05:01:02.274523 containerd[1982]: time="2025-10-13T05:01:02.274499068Z" level=info msg="connecting to shim cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58" address="unix:///run/containerd/s/6bb9438cd072556cb3c2a0eab83b9bc6b008529c9ffb7659b9212c80cdaf9d9c" protocol=ttrpc version=3 Oct 13 05:01:02.293611 systemd[1]: Started cri-containerd-cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58.scope - libcontainer container cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58. 
Oct 13 05:01:02.320882 systemd-networkd[1564]: cali9cf45ed56c4: Gained IPv6LL Oct 13 05:01:02.337014 containerd[1982]: time="2025-10-13T05:01:02.336978245Z" level=info msg="StartContainer for \"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" returns successfully" Oct 13 05:01:02.359490 containerd[1982]: time="2025-10-13T05:01:02.359420806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pm2x,Uid:2557da88-1ce8-4cce-bf44-406de3bbb345,Namespace:kube-system,Attempt:0,}" Oct 13 05:01:02.359800 containerd[1982]: time="2025-10-13T05:01:02.359649124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fn9d,Uid:cc60dacc-3ca6-4b24-9872-16cd8a93e18d,Namespace:calico-system,Attempt:0,}" Oct 13 05:01:02.510613 systemd-networkd[1564]: cali0468841d652: Link UP Oct 13 05:01:02.512103 systemd-networkd[1564]: cali0468841d652: Gained carrier Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.438 [INFO][5419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0 coredns-674b8bbfcf- kube-system 2557da88-1ce8-4cce-bf44-406de3bbb345 823 0 2025-10-13 05:00:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 coredns-674b8bbfcf-2pm2x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0468841d652 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.438 [INFO][5419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.463 [INFO][5446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" HandleID="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Workload="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.464 [INFO][5446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" HandleID="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Workload="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.0-a-bf8a300537", "pod":"coredns-674b8bbfcf-2pm2x", "timestamp":"2025-10-13 05:01:02.463903572 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.464 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.464 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.464 [INFO][5446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.473 [INFO][5446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.478 [INFO][5446] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.481 [INFO][5446] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.483 [INFO][5446] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.485 [INFO][5446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.485 [INFO][5446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.486 [INFO][5446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.493 [INFO][5446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.499 [INFO][5446] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.120.135/26] block=192.168.120.128/26 handle="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.499 [INFO][5446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.135/26] handle="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.499 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:01:02.538577 containerd[1982]: 2025-10-13 05:01:02.499 [INFO][5446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.135/26] IPv6=[] ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" HandleID="k8s-pod-network.d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Workload="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" Oct 13 05:01:02.539029 containerd[1982]: 2025-10-13 05:01:02.502 [INFO][5419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2557da88-1ce8-4cce-bf44-406de3bbb345", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"coredns-674b8bbfcf-2pm2x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0468841d652", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:02.539029 containerd[1982]: 2025-10-13 05:01:02.503 [INFO][5419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.135/32] ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" Oct 13 05:01:02.539029 containerd[1982]: 2025-10-13 05:01:02.503 [INFO][5419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0468841d652 ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" Oct 13 05:01:02.539029 containerd[1982]: 2025-10-13 05:01:02.512 [INFO][5419] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" Oct 13 05:01:02.539029 containerd[1982]: 2025-10-13 05:01:02.513 [INFO][5419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2557da88-1ce8-4cce-bf44-406de3bbb345", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd", Pod:"coredns-674b8bbfcf-2pm2x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0468841d652", 
MAC:"f2:5b:c2:4f:b6:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:02.539029 containerd[1982]: 2025-10-13 05:01:02.529 [INFO][5419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2pm2x" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-coredns--674b8bbfcf--2pm2x-eth0" Oct 13 05:01:02.572155 containerd[1982]: time="2025-10-13T05:01:02.571935129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" id:\"22d6c0e40660aa6ff49905320e7f45b81764b6cda83087be47b299d43f674355\" pid:5479 exited_at:{seconds:1760331662 nanos:571531438}" Oct 13 05:01:02.591433 kubelet[3530]: I1013 05:01:02.590803 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6586bcd76c-whz9f" podStartSLOduration=28.969999376 podStartE2EDuration="33.590787729s" podCreationTimestamp="2025-10-13 05:00:29 +0000 UTC" firstStartedPulling="2025-10-13 05:00:57.588633652 +0000 UTC m=+47.314668303" lastFinishedPulling="2025-10-13 05:01:02.209422013 +0000 UTC m=+51.935456656" observedRunningTime="2025-10-13 05:01:02.551526766 +0000 UTC m=+52.277561481" watchObservedRunningTime="2025-10-13 05:01:02.590787729 +0000 UTC m=+52.316822372" Oct 13 05:01:02.641083 systemd-networkd[1564]: cali4115d814c77: Link UP Oct 13 05:01:02.641822 
systemd-networkd[1564]: cali4115d814c77: Gained carrier Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.437 [INFO][5415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0 csi-node-driver- calico-system cc60dacc-3ca6-4b24-9872-16cd8a93e18d 697 0 2025-10-13 05:00:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.0-a-bf8a300537 csi-node-driver-2fn9d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4115d814c77 [] [] }} ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.438 [INFO][5415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.476 [INFO][5444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" HandleID="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Workload="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.476 [INFO][5444] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" HandleID="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Workload="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.0-a-bf8a300537", "pod":"csi-node-driver-2fn9d", "timestamp":"2025-10-13 05:01:02.476385402 +0000 UTC"}, Hostname:"ci-4487.0.0-a-bf8a300537", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.476 [INFO][5444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.499 [INFO][5444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.500 [INFO][5444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.0-a-bf8a300537' Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.575 [INFO][5444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.582 [INFO][5444] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.594 [INFO][5444] ipam/ipam.go 511: Trying affinity for 192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.596 [INFO][5444] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.601 [INFO][5444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.128/26 host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.601 [INFO][5444] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.128/26 handle="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.603 [INFO][5444] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.611 [INFO][5444] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.128/26 handle="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.631 [INFO][5444] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.120.136/26] block=192.168.120.128/26 handle="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.632 [INFO][5444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.136/26] handle="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" host="ci-4487.0.0-a-bf8a300537" Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.632 [INFO][5444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:01:02.659403 containerd[1982]: 2025-10-13 05:01:02.632 [INFO][5444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.136/26] IPv6=[] ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" HandleID="k8s-pod-network.e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Workload="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" Oct 13 05:01:02.660372 containerd[1982]: 2025-10-13 05:01:02.634 [INFO][5415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc60dacc-3ca6-4b24-9872-16cd8a93e18d", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"", Pod:"csi-node-driver-2fn9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4115d814c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:02.660372 containerd[1982]: 2025-10-13 05:01:02.635 [INFO][5415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.136/32] ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" Oct 13 05:01:02.660372 containerd[1982]: 2025-10-13 05:01:02.635 [INFO][5415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4115d814c77 ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" Oct 13 05:01:02.660372 containerd[1982]: 2025-10-13 05:01:02.641 [INFO][5415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" Oct 13 05:01:02.660372 
containerd[1982]: 2025-10-13 05:01:02.643 [INFO][5415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc60dacc-3ca6-4b24-9872-16cd8a93e18d", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.0-a-bf8a300537", ContainerID:"e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c", Pod:"csi-node-driver-2fn9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4115d814c77", MAC:"fe:b8:b0:ec:89:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:01:02.660372 containerd[1982]: 
2025-10-13 05:01:02.656 [INFO][5415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" Namespace="calico-system" Pod="csi-node-driver-2fn9d" WorkloadEndpoint="ci--4487.0.0--a--bf8a300537-k8s-csi--node--driver--2fn9d-eth0" Oct 13 05:01:02.702556 containerd[1982]: time="2025-10-13T05:01:02.702518313Z" level=info msg="connecting to shim d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd" address="unix:///run/containerd/s/3b78da474b2d64250aad76a8c7a5f5a44c297924897028632db14ab4ceb24c26" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:01:02.720621 systemd[1]: Started cri-containerd-d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd.scope - libcontainer container d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd. Oct 13 05:01:02.722240 containerd[1982]: time="2025-10-13T05:01:02.722049684Z" level=info msg="connecting to shim e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c" address="unix:///run/containerd/s/b87f4447e5242399b092a36141ba9a46bd2cc74120ff7962d0c861f92c7f42b2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:01:02.743611 systemd[1]: Started cri-containerd-e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c.scope - libcontainer container e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c. 
Oct 13 05:01:02.764545 containerd[1982]: time="2025-10-13T05:01:02.764330297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2pm2x,Uid:2557da88-1ce8-4cce-bf44-406de3bbb345,Namespace:kube-system,Attempt:0,} returns sandbox id \"d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd\"" Oct 13 05:01:02.773530 containerd[1982]: time="2025-10-13T05:01:02.773400091Z" level=info msg="CreateContainer within sandbox \"d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:01:02.781755 containerd[1982]: time="2025-10-13T05:01:02.781717562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fn9d,Uid:cc60dacc-3ca6-4b24-9872-16cd8a93e18d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c\"" Oct 13 05:01:02.797495 containerd[1982]: time="2025-10-13T05:01:02.797232762Z" level=info msg="Container c73bedd44e6b97b8fa80f66ddfabf99fedbbb4c4d03822859fa16fda67f4d538: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:02.814987 containerd[1982]: time="2025-10-13T05:01:02.814951092Z" level=info msg="CreateContainer within sandbox \"d707542343dedff41753fcdac5dd05613c2174dd33d2a434cc3378162f3ce5bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c73bedd44e6b97b8fa80f66ddfabf99fedbbb4c4d03822859fa16fda67f4d538\"" Oct 13 05:01:02.816132 containerd[1982]: time="2025-10-13T05:01:02.816041457Z" level=info msg="StartContainer for \"c73bedd44e6b97b8fa80f66ddfabf99fedbbb4c4d03822859fa16fda67f4d538\"" Oct 13 05:01:02.817926 containerd[1982]: time="2025-10-13T05:01:02.817857338Z" level=info msg="connecting to shim c73bedd44e6b97b8fa80f66ddfabf99fedbbb4c4d03822859fa16fda67f4d538" address="unix:///run/containerd/s/3b78da474b2d64250aad76a8c7a5f5a44c297924897028632db14ab4ceb24c26" protocol=ttrpc version=3 Oct 13 05:01:02.835631 systemd[1]: Started 
cri-containerd-c73bedd44e6b97b8fa80f66ddfabf99fedbbb4c4d03822859fa16fda67f4d538.scope - libcontainer container c73bedd44e6b97b8fa80f66ddfabf99fedbbb4c4d03822859fa16fda67f4d538. Oct 13 05:01:02.863556 containerd[1982]: time="2025-10-13T05:01:02.863523105Z" level=info msg="StartContainer for \"c73bedd44e6b97b8fa80f66ddfabf99fedbbb4c4d03822859fa16fda67f4d538\" returns successfully" Oct 13 05:01:03.534683 systemd-networkd[1564]: cali2876d85904e: Gained IPv6LL Oct 13 05:01:03.555045 kubelet[3530]: I1013 05:01:03.554856 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2pm2x" podStartSLOduration=48.554843546 podStartE2EDuration="48.554843546s" podCreationTimestamp="2025-10-13 05:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:01:03.540419848 +0000 UTC m=+53.266454491" watchObservedRunningTime="2025-10-13 05:01:03.554843546 +0000 UTC m=+53.280878189" Oct 13 05:01:04.302684 systemd-networkd[1564]: cali4115d814c77: Gained IPv6LL Oct 13 05:01:04.558616 systemd-networkd[1564]: cali0468841d652: Gained IPv6LL Oct 13 05:01:08.542093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316921435.mount: Deactivated successfully. 
Oct 13 05:01:08.598705 containerd[1982]: time="2025-10-13T05:01:08.598586517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:08.601517 containerd[1982]: time="2025-10-13T05:01:08.601487894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Oct 13 05:01:08.605001 containerd[1982]: time="2025-10-13T05:01:08.604957719Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:08.611121 containerd[1982]: time="2025-10-13T05:01:08.611078106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:08.611625 containerd[1982]: time="2025-10-13T05:01:08.611394659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 6.401181809s" Oct 13 05:01:08.611625 containerd[1982]: time="2025-10-13T05:01:08.611427412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Oct 13 05:01:08.612632 containerd[1982]: time="2025-10-13T05:01:08.612615749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:01:08.620968 containerd[1982]: time="2025-10-13T05:01:08.620787041Z" level=info msg="CreateContainer within sandbox 
\"f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:01:08.640617 containerd[1982]: time="2025-10-13T05:01:08.640576690Z" level=info msg="Container 3747538362957edc16a717775f360dd9f9ed70dfd18490250d1c705bd550d065: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:08.658991 containerd[1982]: time="2025-10-13T05:01:08.658862265Z" level=info msg="CreateContainer within sandbox \"f36a307cbc9f4952e79c13856db8e66ba8c7517052733d2ce37325c78f8196f9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3747538362957edc16a717775f360dd9f9ed70dfd18490250d1c705bd550d065\"" Oct 13 05:01:08.660732 containerd[1982]: time="2025-10-13T05:01:08.659750961Z" level=info msg="StartContainer for \"3747538362957edc16a717775f360dd9f9ed70dfd18490250d1c705bd550d065\"" Oct 13 05:01:08.660732 containerd[1982]: time="2025-10-13T05:01:08.660580545Z" level=info msg="connecting to shim 3747538362957edc16a717775f360dd9f9ed70dfd18490250d1c705bd550d065" address="unix:///run/containerd/s/f5256cc884c29770796b7a0baeac41117608bb4a94558c6c73a27fbb14723922" protocol=ttrpc version=3 Oct 13 05:01:08.686624 systemd[1]: Started cri-containerd-3747538362957edc16a717775f360dd9f9ed70dfd18490250d1c705bd550d065.scope - libcontainer container 3747538362957edc16a717775f360dd9f9ed70dfd18490250d1c705bd550d065. 
Oct 13 05:01:08.723384 containerd[1982]: time="2025-10-13T05:01:08.723324073Z" level=info msg="StartContainer for \"3747538362957edc16a717775f360dd9f9ed70dfd18490250d1c705bd550d065\" returns successfully" Oct 13 05:01:09.557202 kubelet[3530]: I1013 05:01:09.557130 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cf45b76dd-ndrgh" podStartSLOduration=2.171748084 podStartE2EDuration="13.551029519s" podCreationTimestamp="2025-10-13 05:00:56 +0000 UTC" firstStartedPulling="2025-10-13 05:00:57.233050298 +0000 UTC m=+46.959084941" lastFinishedPulling="2025-10-13 05:01:08.612331733 +0000 UTC m=+58.338366376" observedRunningTime="2025-10-13 05:01:09.550149191 +0000 UTC m=+59.276183866" watchObservedRunningTime="2025-10-13 05:01:09.551029519 +0000 UTC m=+59.277064162" Oct 13 05:01:18.911064 containerd[1982]: time="2025-10-13T05:01:18.911008319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:18.913723 containerd[1982]: time="2025-10-13T05:01:18.913537861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 05:01:18.916432 containerd[1982]: time="2025-10-13T05:01:18.916393437Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:18.964502 containerd[1982]: time="2025-10-13T05:01:18.964449998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:18.965348 containerd[1982]: time="2025-10-13T05:01:18.965315046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id 
\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 10.352601247s" Oct 13 05:01:18.965389 containerd[1982]: time="2025-10-13T05:01:18.965352207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 05:01:18.966204 containerd[1982]: time="2025-10-13T05:01:18.966136221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:01:18.976513 containerd[1982]: time="2025-10-13T05:01:18.976230142Z" level=info msg="CreateContainer within sandbox \"984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:01:19.001112 containerd[1982]: time="2025-10-13T05:01:18.998073206Z" level=info msg="Container 8ff0b5ff15b03bf9cd2d2ae3abe85af2c65e4d4206ce86282d832ec42a70bc40: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:19.014329 containerd[1982]: time="2025-10-13T05:01:19.014215887Z" level=info msg="CreateContainer within sandbox \"984b7f460a88b1718ae4d4fcec890c4b81a21a0a6bc65a6341f28886f28a91d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8ff0b5ff15b03bf9cd2d2ae3abe85af2c65e4d4206ce86282d832ec42a70bc40\"" Oct 13 05:01:19.015237 containerd[1982]: time="2025-10-13T05:01:19.014973004Z" level=info msg="StartContainer for \"8ff0b5ff15b03bf9cd2d2ae3abe85af2c65e4d4206ce86282d832ec42a70bc40\"" Oct 13 05:01:19.016001 containerd[1982]: time="2025-10-13T05:01:19.015980544Z" level=info msg="connecting to shim 8ff0b5ff15b03bf9cd2d2ae3abe85af2c65e4d4206ce86282d832ec42a70bc40" address="unix:///run/containerd/s/d906f9987c26a45e6d827d829fd6c9bc95d4a3db0af74f33e20a4c1f156c8a61" protocol=ttrpc 
version=3 Oct 13 05:01:19.052614 systemd[1]: Started cri-containerd-8ff0b5ff15b03bf9cd2d2ae3abe85af2c65e4d4206ce86282d832ec42a70bc40.scope - libcontainer container 8ff0b5ff15b03bf9cd2d2ae3abe85af2c65e4d4206ce86282d832ec42a70bc40. Oct 13 05:01:19.087988 containerd[1982]: time="2025-10-13T05:01:19.087948195Z" level=info msg="StartContainer for \"8ff0b5ff15b03bf9cd2d2ae3abe85af2c65e4d4206ce86282d832ec42a70bc40\" returns successfully" Oct 13 05:01:19.580050 kubelet[3530]: I1013 05:01:19.579952 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5567d8994c-qldmt" podStartSLOduration=36.320258903 podStartE2EDuration="54.579936198s" podCreationTimestamp="2025-10-13 05:00:25 +0000 UTC" firstStartedPulling="2025-10-13 05:01:00.706386052 +0000 UTC m=+50.432420703" lastFinishedPulling="2025-10-13 05:01:18.966063355 +0000 UTC m=+68.692097998" observedRunningTime="2025-10-13 05:01:19.579875901 +0000 UTC m=+69.305910584" watchObservedRunningTime="2025-10-13 05:01:19.579936198 +0000 UTC m=+69.305970849" Oct 13 05:01:20.564169 kubelet[3530]: I1013 05:01:20.564131 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:01:24.170372 kubelet[3530]: I1013 05:01:24.170262 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:01:25.333197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679535079.mount: Deactivated successfully. 
Oct 13 05:01:27.920591 containerd[1982]: time="2025-10-13T05:01:27.920535353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\" id:\"254d7838e7eacc5fd144f57c1675fd61195643452ce59414c191ccd87de9093e\" pid:5777 exited_at:{seconds:1760331687 nanos:918846130}" Oct 13 05:01:27.939507 containerd[1982]: time="2025-10-13T05:01:27.938888533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:27.942721 containerd[1982]: time="2025-10-13T05:01:27.942680310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 05:01:27.982458 containerd[1982]: time="2025-10-13T05:01:27.982408800Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:27.989395 containerd[1982]: time="2025-10-13T05:01:27.989041664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:27.991433 containerd[1982]: time="2025-10-13T05:01:27.991322823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 9.025145825s" Oct 13 05:01:27.991433 containerd[1982]: time="2025-10-13T05:01:27.991352456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference 
\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 05:01:27.992057 containerd[1982]: time="2025-10-13T05:01:27.992040779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:01:28.045802 containerd[1982]: time="2025-10-13T05:01:28.045649029Z" level=info msg="CreateContainer within sandbox \"00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:01:28.139175 containerd[1982]: time="2025-10-13T05:01:28.139112686Z" level=info msg="Container cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:28.145137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018776499.mount: Deactivated successfully. Oct 13 05:01:28.188989 containerd[1982]: time="2025-10-13T05:01:28.188607638Z" level=info msg="CreateContainer within sandbox \"00113299a0a60fc417250f81e3ea4dea474d36dffe4f9ee7896c416704a2b7c8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\"" Oct 13 05:01:28.191534 containerd[1982]: time="2025-10-13T05:01:28.189939987Z" level=info msg="StartContainer for \"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\"" Oct 13 05:01:28.192288 containerd[1982]: time="2025-10-13T05:01:28.192249043Z" level=info msg="connecting to shim cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082" address="unix:///run/containerd/s/e7bf34dbb00be87db0ff09d0892b6cf1ba0d6821ad433ee74d6ffce6c1745dbd" protocol=ttrpc version=3 Oct 13 05:01:28.213621 systemd[1]: Started cri-containerd-cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082.scope - libcontainer container cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082. 
Oct 13 05:01:28.247270 containerd[1982]: time="2025-10-13T05:01:28.246548121Z" level=info msg="StartContainer for \"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" returns successfully" Oct 13 05:01:28.444752 containerd[1982]: time="2025-10-13T05:01:28.444624386Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:28.486723 containerd[1982]: time="2025-10-13T05:01:28.485905811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:01:28.487442 containerd[1982]: time="2025-10-13T05:01:28.487411405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 495.273304ms" Oct 13 05:01:28.487570 containerd[1982]: time="2025-10-13T05:01:28.487554657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 05:01:28.488710 containerd[1982]: time="2025-10-13T05:01:28.488694497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:01:28.498049 containerd[1982]: time="2025-10-13T05:01:28.498025164Z" level=info msg="CreateContainer within sandbox \"00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:01:28.600034 kubelet[3530]: I1013 05:01:28.599973 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-vf59k" podStartSLOduration=33.414878854 podStartE2EDuration="1m0.599956999s" 
podCreationTimestamp="2025-10-13 05:00:28 +0000 UTC" firstStartedPulling="2025-10-13 05:01:00.80684163 +0000 UTC m=+50.532876273" lastFinishedPulling="2025-10-13 05:01:27.991919775 +0000 UTC m=+77.717954418" observedRunningTime="2025-10-13 05:01:28.599725785 +0000 UTC m=+78.325760428" watchObservedRunningTime="2025-10-13 05:01:28.599956999 +0000 UTC m=+78.325991650" Oct 13 05:01:28.607494 containerd[1982]: time="2025-10-13T05:01:28.606117074Z" level=info msg="Container 2257b12dc2140ab4c710982fe6a6b6498806c6797eae8fe9c438bb5b4eee0910: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:28.612106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294710597.mount: Deactivated successfully. Oct 13 05:01:28.679840 containerd[1982]: time="2025-10-13T05:01:28.679800310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"a8b3a400c2d6c97dcf79df64ccd05923b78f1188fa335616083a1027a574c57c\" pid:5839 exit_status:1 exited_at:{seconds:1760331688 nanos:679385795}" Oct 13 05:01:28.698667 containerd[1982]: time="2025-10-13T05:01:28.698554502Z" level=info msg="CreateContainer within sandbox \"00315563810d03369fe525e9ea69e2134c05a4829d29079b5e0286d2180c99d8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2257b12dc2140ab4c710982fe6a6b6498806c6797eae8fe9c438bb5b4eee0910\"" Oct 13 05:01:28.700177 containerd[1982]: time="2025-10-13T05:01:28.700143210Z" level=info msg="StartContainer for \"2257b12dc2140ab4c710982fe6a6b6498806c6797eae8fe9c438bb5b4eee0910\"" Oct 13 05:01:28.701080 containerd[1982]: time="2025-10-13T05:01:28.701054052Z" level=info msg="connecting to shim 2257b12dc2140ab4c710982fe6a6b6498806c6797eae8fe9c438bb5b4eee0910" address="unix:///run/containerd/s/1386a159a129a2080d3102e9323f71ee6e0416494ecafc19725c1daf68e9b11d" protocol=ttrpc version=3 Oct 13 05:01:28.725646 systemd[1]: Started 
cri-containerd-2257b12dc2140ab4c710982fe6a6b6498806c6797eae8fe9c438bb5b4eee0910.scope - libcontainer container 2257b12dc2140ab4c710982fe6a6b6498806c6797eae8fe9c438bb5b4eee0910. Oct 13 05:01:28.777870 containerd[1982]: time="2025-10-13T05:01:28.777832718Z" level=info msg="StartContainer for \"2257b12dc2140ab4c710982fe6a6b6498806c6797eae8fe9c438bb5b4eee0910\" returns successfully" Oct 13 05:01:29.620204 kubelet[3530]: I1013 05:01:29.619783 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5567d8994c-289xc" podStartSLOduration=37.991093353 podStartE2EDuration="1m4.619766592s" podCreationTimestamp="2025-10-13 05:00:25 +0000 UTC" firstStartedPulling="2025-10-13 05:01:01.859577229 +0000 UTC m=+51.585611880" lastFinishedPulling="2025-10-13 05:01:28.488250468 +0000 UTC m=+78.214285119" observedRunningTime="2025-10-13 05:01:29.615281292 +0000 UTC m=+79.341315935" watchObservedRunningTime="2025-10-13 05:01:29.619766592 +0000 UTC m=+79.345801235" Oct 13 05:01:29.835587 containerd[1982]: time="2025-10-13T05:01:29.835534946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"0f99f27673143cae0d9501794f65af9b76eb20a952b3f543206953f6f7283c6c\" pid:5899 exit_status:1 exited_at:{seconds:1760331689 nanos:824864514}" Oct 13 05:01:30.653231 containerd[1982]: time="2025-10-13T05:01:30.653198435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"d6612aa047a8d6b587af136071486ec731193c324087dd5e5727e2cb6abae2f0\" pid:5929 exited_at:{seconds:1760331690 nanos:652582034}" Oct 13 05:01:32.634260 containerd[1982]: time="2025-10-13T05:01:32.634216755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" id:\"875d0c3b6375fce8b0d76022b3028c5f676672ec623f013fb36f1cf9307e7b58\" 
pid:5952 exited_at:{seconds:1760331692 nanos:633859321}" Oct 13 05:01:39.089333 containerd[1982]: time="2025-10-13T05:01:39.089264178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:39.094705 containerd[1982]: time="2025-10-13T05:01:39.094668675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 05:01:39.096642 containerd[1982]: time="2025-10-13T05:01:39.096605295Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:39.103779 containerd[1982]: time="2025-10-13T05:01:39.103658171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 10.614579856s" Oct 13 05:01:39.103779 containerd[1982]: time="2025-10-13T05:01:39.103685796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 05:01:39.112330 containerd[1982]: time="2025-10-13T05:01:39.111679121Z" level=info msg="CreateContainer within sandbox \"e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:01:39.115500 containerd[1982]: time="2025-10-13T05:01:39.115096973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:39.138494 containerd[1982]: 
time="2025-10-13T05:01:39.138444165Z" level=info msg="Container d06811238e7c0290f68cd54ea9257ac36e8e321c955a8ea63f714a7b7b892814: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:39.157674 containerd[1982]: time="2025-10-13T05:01:39.157642638Z" level=info msg="CreateContainer within sandbox \"e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d06811238e7c0290f68cd54ea9257ac36e8e321c955a8ea63f714a7b7b892814\"" Oct 13 05:01:39.158678 containerd[1982]: time="2025-10-13T05:01:39.158628520Z" level=info msg="StartContainer for \"d06811238e7c0290f68cd54ea9257ac36e8e321c955a8ea63f714a7b7b892814\"" Oct 13 05:01:39.159635 containerd[1982]: time="2025-10-13T05:01:39.159607274Z" level=info msg="connecting to shim d06811238e7c0290f68cd54ea9257ac36e8e321c955a8ea63f714a7b7b892814" address="unix:///run/containerd/s/b87f4447e5242399b092a36141ba9a46bd2cc74120ff7962d0c861f92c7f42b2" protocol=ttrpc version=3 Oct 13 05:01:39.185007 systemd[1]: Started cri-containerd-d06811238e7c0290f68cd54ea9257ac36e8e321c955a8ea63f714a7b7b892814.scope - libcontainer container d06811238e7c0290f68cd54ea9257ac36e8e321c955a8ea63f714a7b7b892814. 
Oct 13 05:01:39.226820 containerd[1982]: time="2025-10-13T05:01:39.226782485Z" level=info msg="StartContainer for \"d06811238e7c0290f68cd54ea9257ac36e8e321c955a8ea63f714a7b7b892814\" returns successfully" Oct 13 05:01:39.228142 containerd[1982]: time="2025-10-13T05:01:39.227943076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:01:43.637776 containerd[1982]: time="2025-10-13T05:01:43.637289303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:43.642196 containerd[1982]: time="2025-10-13T05:01:43.642158041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 05:01:43.646148 containerd[1982]: time="2025-10-13T05:01:43.646068897Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:43.650206 containerd[1982]: time="2025-10-13T05:01:43.650156975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:01:43.650797 containerd[1982]: time="2025-10-13T05:01:43.650564714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 4.422593956s" Oct 13 05:01:43.650797 containerd[1982]: time="2025-10-13T05:01:43.650594378Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 05:01:43.657377 containerd[1982]: time="2025-10-13T05:01:43.657355551Z" level=info msg="CreateContainer within sandbox \"e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:01:43.677367 containerd[1982]: time="2025-10-13T05:01:43.677334925Z" level=info msg="Container e7934c880b97d338c353946edd83a192df09a4c9ce49365a5fc4f5d907d370d4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:01:43.680758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133403980.mount: Deactivated successfully. Oct 13 05:01:43.703494 containerd[1982]: time="2025-10-13T05:01:43.703238241Z" level=info msg="CreateContainer within sandbox \"e2f08dc5dcbcc7c3d604c5fded1bee956eb711544188a0a00f7a9718f209bc2c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e7934c880b97d338c353946edd83a192df09a4c9ce49365a5fc4f5d907d370d4\"" Oct 13 05:01:43.705828 containerd[1982]: time="2025-10-13T05:01:43.705802726Z" level=info msg="StartContainer for \"e7934c880b97d338c353946edd83a192df09a4c9ce49365a5fc4f5d907d370d4\"" Oct 13 05:01:43.707555 containerd[1982]: time="2025-10-13T05:01:43.707529828Z" level=info msg="connecting to shim e7934c880b97d338c353946edd83a192df09a4c9ce49365a5fc4f5d907d370d4" address="unix:///run/containerd/s/b87f4447e5242399b092a36141ba9a46bd2cc74120ff7962d0c861f92c7f42b2" protocol=ttrpc version=3 Oct 13 05:01:43.732637 systemd[1]: Started cri-containerd-e7934c880b97d338c353946edd83a192df09a4c9ce49365a5fc4f5d907d370d4.scope - libcontainer container e7934c880b97d338c353946edd83a192df09a4c9ce49365a5fc4f5d907d370d4. 
Oct 13 05:01:43.771319 containerd[1982]: time="2025-10-13T05:01:43.771283907Z" level=info msg="StartContainer for \"e7934c880b97d338c353946edd83a192df09a4c9ce49365a5fc4f5d907d370d4\" returns successfully" Oct 13 05:01:44.469055 kubelet[3530]: I1013 05:01:44.469019 3530 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:01:44.472098 kubelet[3530]: I1013 05:01:44.472073 3530 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:01:53.346537 systemd[1]: Started sshd@7-10.200.20.16:22-10.200.16.10:50478.service - OpenSSH per-connection server daemon (10.200.16.10:50478). Oct 13 05:01:53.811896 sshd[6047]: Accepted publickey for core from 10.200.16.10 port 50478 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:01:53.813977 sshd-session[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:01:53.818292 systemd-logind[1953]: New session 10 of user core. Oct 13 05:01:53.824809 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:01:54.183701 sshd[6050]: Connection closed by 10.200.16.10 port 50478 Oct 13 05:01:54.184236 sshd-session[6047]: pam_unix(sshd:session): session closed for user core Oct 13 05:01:54.188161 systemd[1]: sshd@7-10.200.20.16:22-10.200.16.10:50478.service: Deactivated successfully. Oct 13 05:01:54.189865 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:01:54.190688 systemd-logind[1953]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:01:54.192348 systemd-logind[1953]: Removed session 10. 
Oct 13 05:01:56.657595 containerd[1982]: time="2025-10-13T05:01:56.657017719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\" id:\"2bbe352eabf39b4cd013573c768047fca621403454b52cddf104e711868527aa\" pid:6073 exited_at:{seconds:1760331716 nanos:656516313}" Oct 13 05:01:59.263698 systemd[1]: Started sshd@8-10.200.20.16:22-10.200.16.10:50490.service - OpenSSH per-connection server daemon (10.200.16.10:50490). Oct 13 05:01:59.704575 sshd[6089]: Accepted publickey for core from 10.200.16.10 port 50490 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:01:59.705949 sshd-session[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:01:59.712602 systemd-logind[1953]: New session 11 of user core. Oct 13 05:01:59.716685 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:02:00.093583 sshd[6092]: Connection closed by 10.200.16.10 port 50490 Oct 13 05:02:00.094410 sshd-session[6089]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:00.099038 systemd[1]: sshd@8-10.200.20.16:22-10.200.16.10:50490.service: Deactivated successfully. Oct 13 05:02:00.104355 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:02:00.108242 systemd-logind[1953]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:02:00.109429 systemd-logind[1953]: Removed session 11. 
Oct 13 05:02:00.650505 containerd[1982]: time="2025-10-13T05:02:00.650382863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"b9f84c3bf3b8bf694c807d3a09cb77368cff1fc9f963ba98b2fea7e04b69331d\" pid:6115 exited_at:{seconds:1760331720 nanos:649729301}" Oct 13 05:02:02.546616 containerd[1982]: time="2025-10-13T05:02:02.546572443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" id:\"bd81b65dbc50fe7c1af8dea2f5276d8baa53d1707e9510bf71c9286322cbf7b4\" pid:6137 exited_at:{seconds:1760331722 nanos:544988391}" Oct 13 05:02:04.213890 containerd[1982]: time="2025-10-13T05:02:04.213830711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" id:\"9a0f806488bb3774b1476e1b892ae9df77cb3c7f29253d958225320d406f8cba\" pid:6159 exited_at:{seconds:1760331724 nanos:213536111}" Oct 13 05:02:05.175696 systemd[1]: Started sshd@9-10.200.20.16:22-10.200.16.10:58262.service - OpenSSH per-connection server daemon (10.200.16.10:58262). Oct 13 05:02:05.610631 sshd[6169]: Accepted publickey for core from 10.200.16.10 port 58262 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:05.611770 sshd-session[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:05.615538 systemd-logind[1953]: New session 12 of user core. Oct 13 05:02:05.619591 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:02:06.003723 sshd[6172]: Connection closed by 10.200.16.10 port 58262 Oct 13 05:02:06.003458 sshd-session[6169]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:06.007442 systemd-logind[1953]: Session 12 logged out. Waiting for processes to exit. 
Oct 13 05:02:06.007821 systemd[1]: sshd@9-10.200.20.16:22-10.200.16.10:58262.service: Deactivated successfully. Oct 13 05:02:06.011355 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:02:06.015244 systemd-logind[1953]: Removed session 12. Oct 13 05:02:09.431604 containerd[1982]: time="2025-10-13T05:02:09.431556940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"d382ceb196418008c11279ec9a538957c3e30c9608ee24e6aeb90540bafdf893\" pid:6196 exited_at:{seconds:1760331729 nanos:431248716}" Oct 13 05:02:11.086816 systemd[1]: Started sshd@10-10.200.20.16:22-10.200.16.10:50994.service - OpenSSH per-connection server daemon (10.200.16.10:50994). Oct 13 05:02:11.530127 sshd[6209]: Accepted publickey for core from 10.200.16.10 port 50994 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:11.531356 sshd-session[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:11.535607 systemd-logind[1953]: New session 13 of user core. Oct 13 05:02:11.542607 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:02:11.907216 sshd[6212]: Connection closed by 10.200.16.10 port 50994 Oct 13 05:02:11.907748 sshd-session[6209]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:11.911290 systemd[1]: sshd@10-10.200.20.16:22-10.200.16.10:50994.service: Deactivated successfully. Oct 13 05:02:11.913000 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:02:11.913875 systemd-logind[1953]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:02:11.915080 systemd-logind[1953]: Removed session 13. Oct 13 05:02:11.988570 systemd[1]: Started sshd@11-10.200.20.16:22-10.200.16.10:51000.service - OpenSSH per-connection server daemon (10.200.16.10:51000). 
Oct 13 05:02:12.417705 sshd[6224]: Accepted publickey for core from 10.200.16.10 port 51000 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:12.419951 sshd-session[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:12.425650 systemd-logind[1953]: New session 14 of user core. Oct 13 05:02:12.432619 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:02:12.826593 sshd[6227]: Connection closed by 10.200.16.10 port 51000 Oct 13 05:02:12.827380 sshd-session[6224]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:12.831749 systemd[1]: sshd@11-10.200.20.16:22-10.200.16.10:51000.service: Deactivated successfully. Oct 13 05:02:12.832164 systemd-logind[1953]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:02:12.835134 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:02:12.838978 systemd-logind[1953]: Removed session 14. Oct 13 05:02:12.903320 systemd[1]: Started sshd@12-10.200.20.16:22-10.200.16.10:51014.service - OpenSSH per-connection server daemon (10.200.16.10:51014). Oct 13 05:02:13.334706 sshd[6237]: Accepted publickey for core from 10.200.16.10 port 51014 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:13.335858 sshd-session[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:13.339762 systemd-logind[1953]: New session 15 of user core. Oct 13 05:02:13.346611 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:02:13.707589 sshd[6240]: Connection closed by 10.200.16.10 port 51014 Oct 13 05:02:13.708097 sshd-session[6237]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:13.712144 systemd-logind[1953]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:02:13.712833 systemd[1]: sshd@12-10.200.20.16:22-10.200.16.10:51014.service: Deactivated successfully. 
Oct 13 05:02:13.714747 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:02:13.716017 systemd-logind[1953]: Removed session 15. Oct 13 05:02:18.786289 systemd[1]: Started sshd@13-10.200.20.16:22-10.200.16.10:51030.service - OpenSSH per-connection server daemon (10.200.16.10:51030). Oct 13 05:02:19.222753 sshd[6257]: Accepted publickey for core from 10.200.16.10 port 51030 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:19.223939 sshd-session[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:19.228314 systemd-logind[1953]: New session 16 of user core. Oct 13 05:02:19.237621 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:02:19.582484 sshd[6260]: Connection closed by 10.200.16.10 port 51030 Oct 13 05:02:19.583026 sshd-session[6257]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:19.586224 systemd[1]: sshd@13-10.200.20.16:22-10.200.16.10:51030.service: Deactivated successfully. Oct 13 05:02:19.588180 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:02:19.589507 systemd-logind[1953]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:02:19.591099 systemd-logind[1953]: Removed session 16. Oct 13 05:02:24.664937 systemd[1]: Started sshd@14-10.200.20.16:22-10.200.16.10:55100.service - OpenSSH per-connection server daemon (10.200.16.10:55100). Oct 13 05:02:25.098268 sshd[6280]: Accepted publickey for core from 10.200.16.10 port 55100 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:25.099428 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:25.103295 systemd-logind[1953]: New session 17 of user core. Oct 13 05:02:25.108609 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 13 05:02:25.463594 sshd[6283]: Connection closed by 10.200.16.10 port 55100 Oct 13 05:02:25.464105 sshd-session[6280]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:25.467764 systemd[1]: sshd@14-10.200.20.16:22-10.200.16.10:55100.service: Deactivated successfully. Oct 13 05:02:25.469813 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:02:25.470544 systemd-logind[1953]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:02:25.471839 systemd-logind[1953]: Removed session 17. Oct 13 05:02:26.546841 containerd[1982]: time="2025-10-13T05:02:26.546792616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\" id:\"2c1894d7c5ce1e003b4253d27c9abb7e73700ebcf4644dac09f411cec5ab649f\" pid:6306 exited_at:{seconds:1760331746 nanos:546460951}" Oct 13 05:02:30.537556 systemd[1]: Started sshd@15-10.200.20.16:22-10.200.16.10:48450.service - OpenSSH per-connection server daemon (10.200.16.10:48450). Oct 13 05:02:30.664588 containerd[1982]: time="2025-10-13T05:02:30.664545598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"7a9fac057febe5dee4ee92c2accd2fef25e5cffe9894c886923ae007ec7f43c9\" pid:6332 exited_at:{seconds:1760331750 nanos:663774873}" Oct 13 05:02:30.974304 sshd[6317]: Accepted publickey for core from 10.200.16.10 port 48450 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:30.975813 sshd-session[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:30.980268 systemd-logind[1953]: New session 18 of user core. Oct 13 05:02:30.986751 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 13 05:02:31.330645 sshd[6342]: Connection closed by 10.200.16.10 port 48450 Oct 13 05:02:31.330056 sshd-session[6317]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:31.333158 systemd-logind[1953]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:02:31.333304 systemd[1]: sshd@15-10.200.20.16:22-10.200.16.10:48450.service: Deactivated successfully. Oct 13 05:02:31.335088 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:02:31.337304 systemd-logind[1953]: Removed session 18. Oct 13 05:02:32.548372 containerd[1982]: time="2025-10-13T05:02:32.548231073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" id:\"760bce73f73f6ca157a68fef491b236ecd7efc42f5db66b8dec973ed04477da7\" pid:6387 exited_at:{seconds:1760331752 nanos:548021339}" Oct 13 05:02:36.411396 systemd[1]: Started sshd@16-10.200.20.16:22-10.200.16.10:48452.service - OpenSSH per-connection server daemon (10.200.16.10:48452). Oct 13 05:02:36.850735 sshd[6397]: Accepted publickey for core from 10.200.16.10 port 48452 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA Oct 13 05:02:36.852305 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:02:36.858095 systemd-logind[1953]: New session 19 of user core. Oct 13 05:02:36.865460 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:02:37.220617 sshd[6400]: Connection closed by 10.200.16.10 port 48452 Oct 13 05:02:37.221429 sshd-session[6397]: pam_unix(sshd:session): session closed for user core Oct 13 05:02:37.225105 systemd[1]: sshd@16-10.200.20.16:22-10.200.16.10:48452.service: Deactivated successfully. Oct 13 05:02:37.227866 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:02:37.228513 systemd-logind[1953]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:02:37.229599 systemd-logind[1953]: Removed session 19. 
Oct 13 05:02:37.294366 systemd[1]: Started sshd@17-10.200.20.16:22-10.200.16.10:48462.service - OpenSSH per-connection server daemon (10.200.16.10:48462).
Oct 13 05:02:37.714688 sshd[6411]: Accepted publickey for core from 10.200.16.10 port 48462 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:02:37.715723 sshd-session[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:02:37.719687 systemd-logind[1953]: New session 20 of user core.
Oct 13 05:02:37.722587 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:02:38.234704 sshd[6414]: Connection closed by 10.200.16.10 port 48462
Oct 13 05:02:38.236707 sshd-session[6411]: pam_unix(sshd:session): session closed for user core
Oct 13 05:02:38.240464 systemd[1]: sshd@17-10.200.20.16:22-10.200.16.10:48462.service: Deactivated successfully.
Oct 13 05:02:38.243177 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:02:38.246633 systemd-logind[1953]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:02:38.249069 systemd-logind[1953]: Removed session 20.
Oct 13 05:02:38.316312 systemd[1]: Started sshd@18-10.200.20.16:22-10.200.16.10:48474.service - OpenSSH per-connection server daemon (10.200.16.10:48474).
Oct 13 05:02:38.757272 sshd[6424]: Accepted publickey for core from 10.200.16.10 port 48474 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:02:38.757641 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:02:38.762429 systemd-logind[1953]: New session 21 of user core.
Oct 13 05:02:38.767635 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:02:39.464906 sshd[6427]: Connection closed by 10.200.16.10 port 48474
Oct 13 05:02:39.466856 sshd-session[6424]: pam_unix(sshd:session): session closed for user core
Oct 13 05:02:39.470083 systemd[1]: sshd@18-10.200.20.16:22-10.200.16.10:48474.service: Deactivated successfully.
Oct 13 05:02:39.473405 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:02:39.475988 systemd-logind[1953]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:02:39.478074 systemd-logind[1953]: Removed session 21.
Oct 13 05:02:39.542260 systemd[1]: Started sshd@19-10.200.20.16:22-10.200.16.10:48482.service - OpenSSH per-connection server daemon (10.200.16.10:48482).
Oct 13 05:02:39.981878 sshd[6451]: Accepted publickey for core from 10.200.16.10 port 48482 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:02:39.985732 sshd-session[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:02:39.994802 systemd-logind[1953]: New session 22 of user core.
Oct 13 05:02:39.999648 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:02:40.462027 sshd[6454]: Connection closed by 10.200.16.10 port 48482
Oct 13 05:02:40.462610 sshd-session[6451]: pam_unix(sshd:session): session closed for user core
Oct 13 05:02:40.467727 systemd-logind[1953]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:02:40.468027 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:02:40.469414 systemd[1]: sshd@19-10.200.20.16:22-10.200.16.10:48482.service: Deactivated successfully.
Oct 13 05:02:40.473342 systemd-logind[1953]: Removed session 22.
Oct 13 05:02:40.542253 systemd[1]: Started sshd@20-10.200.20.16:22-10.200.16.10:48324.service - OpenSSH per-connection server daemon (10.200.16.10:48324).
Oct 13 05:02:40.964740 sshd[6464]: Accepted publickey for core from 10.200.16.10 port 48324 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:02:40.965941 sshd-session[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:02:40.969494 systemd-logind[1953]: New session 23 of user core.
Oct 13 05:02:40.976624 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:02:41.354328 sshd[6467]: Connection closed by 10.200.16.10 port 48324
Oct 13 05:02:41.355681 sshd-session[6464]: pam_unix(sshd:session): session closed for user core
Oct 13 05:02:41.360507 systemd-logind[1953]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:02:41.360750 systemd[1]: sshd@20-10.200.20.16:22-10.200.16.10:48324.service: Deactivated successfully.
Oct 13 05:02:41.364383 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:02:41.367883 systemd-logind[1953]: Removed session 23.
Oct 13 05:02:46.435411 systemd[1]: Started sshd@21-10.200.20.16:22-10.200.16.10:48340.service - OpenSSH per-connection server daemon (10.200.16.10:48340).
Oct 13 05:02:46.866693 sshd[6481]: Accepted publickey for core from 10.200.16.10 port 48340 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:02:46.867822 sshd-session[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:02:46.871519 systemd-logind[1953]: New session 24 of user core.
Oct 13 05:02:46.876611 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:02:47.220405 sshd[6486]: Connection closed by 10.200.16.10 port 48340
Oct 13 05:02:47.220961 sshd-session[6481]: pam_unix(sshd:session): session closed for user core
Oct 13 05:02:47.224771 systemd[1]: sshd@21-10.200.20.16:22-10.200.16.10:48340.service: Deactivated successfully.
Oct 13 05:02:47.226646 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:02:47.227326 systemd-logind[1953]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:02:47.228451 systemd-logind[1953]: Removed session 24.
Oct 13 05:02:52.309463 systemd[1]: Started sshd@22-10.200.20.16:22-10.200.16.10:58600.service - OpenSSH per-connection server daemon (10.200.16.10:58600).
Oct 13 05:02:52.766559 sshd[6497]: Accepted publickey for core from 10.200.16.10 port 58600 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:02:52.768996 sshd-session[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:02:52.774667 systemd-logind[1953]: New session 25 of user core.
Oct 13 05:02:52.781796 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 05:02:53.158325 sshd[6500]: Connection closed by 10.200.16.10 port 58600
Oct 13 05:02:53.159685 sshd-session[6497]: pam_unix(sshd:session): session closed for user core
Oct 13 05:02:53.164410 systemd[1]: sshd@22-10.200.20.16:22-10.200.16.10:58600.service: Deactivated successfully.
Oct 13 05:02:53.168728 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:02:53.169717 systemd-logind[1953]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:02:53.171729 systemd-logind[1953]: Removed session 25.
Oct 13 05:02:56.554050 containerd[1982]: time="2025-10-13T05:02:56.553903118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26bc2d2c0d9159be10b350eafac89e0842b8ce20fca1b5111b8b1a2f2a7a626\" id:\"98790ff61766d1786db3b9777a9c04ba1dd1c4141966625771c9c52771913130\" pid:6521 exited_at:{seconds:1760331776 nanos:553584926}"
Oct 13 05:02:58.233689 systemd[1]: Started sshd@23-10.200.20.16:22-10.200.16.10:58616.service - OpenSSH per-connection server daemon (10.200.16.10:58616).
Oct 13 05:02:58.667971 sshd[6534]: Accepted publickey for core from 10.200.16.10 port 58616 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:02:58.669132 sshd-session[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:02:58.672672 systemd-logind[1953]: New session 26 of user core.
Oct 13 05:02:58.676625 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 13 05:02:59.025135 sshd[6537]: Connection closed by 10.200.16.10 port 58616
Oct 13 05:02:59.028123 sshd-session[6534]: pam_unix(sshd:session): session closed for user core
Oct 13 05:02:59.032122 systemd-logind[1953]: Session 26 logged out. Waiting for processes to exit.
Oct 13 05:02:59.032188 systemd[1]: sshd@23-10.200.20.16:22-10.200.16.10:58616.service: Deactivated successfully.
Oct 13 05:02:59.035140 systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 05:02:59.036761 systemd-logind[1953]: Removed session 26.
Oct 13 05:03:00.643054 containerd[1982]: time="2025-10-13T05:03:00.643015541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"df8ff0f02c213138aae382c5bf5508ac7d487ad834a30383c380676d8251c1f6\" pid:6562 exited_at:{seconds:1760331780 nanos:642567137}"
Oct 13 05:03:02.592158 containerd[1982]: time="2025-10-13T05:03:02.592115926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" id:\"642587a7176c049e540494b6639fa6a221ee9711231e74fcf6a6abbbc857e19c\" pid:6583 exited_at:{seconds:1760331782 nanos:591663985}"
Oct 13 05:03:04.098703 systemd[1]: Started sshd@24-10.200.20.16:22-10.200.16.10:33946.service - OpenSSH per-connection server daemon (10.200.16.10:33946).
Oct 13 05:03:04.226543 containerd[1982]: time="2025-10-13T05:03:04.225673043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cebb2b01adc358c979d8f8331b4b08ce698bf710ebfa18683dfae832cfc42b58\" id:\"eefc734c4a698a749c54c547a21227eed2661d995a76fdf35e999d599b479406\" pid:6609 exited_at:{seconds:1760331784 nanos:225459941}"
Oct 13 05:03:04.527850 sshd[6594]: Accepted publickey for core from 10.200.16.10 port 33946 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:03:04.529147 sshd-session[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:03:04.534899 systemd-logind[1953]: New session 27 of user core.
Oct 13 05:03:04.539586 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 13 05:03:04.889588 sshd[6618]: Connection closed by 10.200.16.10 port 33946
Oct 13 05:03:04.889000 sshd-session[6594]: pam_unix(sshd:session): session closed for user core
Oct 13 05:03:04.892621 systemd[1]: sshd@24-10.200.20.16:22-10.200.16.10:33946.service: Deactivated successfully.
Oct 13 05:03:04.892978 systemd-logind[1953]: Session 27 logged out. Waiting for processes to exit.
Oct 13 05:03:04.894843 systemd[1]: session-27.scope: Deactivated successfully.
Oct 13 05:03:04.897021 systemd-logind[1953]: Removed session 27.
Oct 13 05:03:09.438521 containerd[1982]: time="2025-10-13T05:03:09.437701482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee149a7801fe55dfaf1f36bc205acde6e5a70e853cefb0018fd5d7631a29082\" id:\"a0f3439696688d6d8cd19e54118b5c976ed3b7d6656822f793309ac385b5aefa\" pid:6644 exited_at:{seconds:1760331789 nanos:437353016}"
Oct 13 05:03:09.973183 systemd[1]: Started sshd@25-10.200.20.16:22-10.200.16.10:33962.service - OpenSSH per-connection server daemon (10.200.16.10:33962).
Oct 13 05:03:10.404451 sshd[6655]: Accepted publickey for core from 10.200.16.10 port 33962 ssh2: RSA SHA256:0u0fQSSzne3hQhq8oltmzZFBGtzHLYbFgWa9+RtcBLA
Oct 13 05:03:10.405622 sshd-session[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:03:10.411868 systemd-logind[1953]: New session 28 of user core.
Oct 13 05:03:10.417614 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 13 05:03:10.773591 sshd[6660]: Connection closed by 10.200.16.10 port 33962
Oct 13 05:03:10.773909 sshd-session[6655]: pam_unix(sshd:session): session closed for user core
Oct 13 05:03:10.780218 systemd[1]: sshd@25-10.200.20.16:22-10.200.16.10:33962.service: Deactivated successfully.
Oct 13 05:03:10.782789 systemd[1]: session-28.scope: Deactivated successfully.
Oct 13 05:03:10.784207 systemd-logind[1953]: Session 28 logged out. Waiting for processes to exit.
Oct 13 05:03:10.786006 systemd-logind[1953]: Removed session 28.