May 15 11:55:47.031462 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] May 15 11:55:47.031480 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu May 15 10:40:40 -00 2025 May 15 11:55:47.031486 kernel: KASLR enabled May 15 11:55:47.031490 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 15 11:55:47.031494 kernel: printk: legacy bootconsole [pl11] enabled May 15 11:55:47.031498 kernel: efi: EFI v2.7 by EDK II May 15 11:55:47.031503 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 May 15 11:55:47.031507 kernel: random: crng init done May 15 11:55:47.031511 kernel: secureboot: Secure boot disabled May 15 11:55:47.031515 kernel: ACPI: Early table checksum verification disabled May 15 11:55:47.031519 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 15 11:55:47.031523 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031527 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031531 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 15 11:55:47.031536 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031541 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031545 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031550 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031554 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031558 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031562 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 15 11:55:47.031566 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 15 11:55:47.031570 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 15 11:55:47.031574 kernel: ACPI: Use ACPI SPCR as default console: Yes May 15 11:55:47.031579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug May 15 11:55:47.031583 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug May 15 11:55:47.031587 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug May 15 11:55:47.031591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug May 15 11:55:47.031595 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug May 15 11:55:47.031600 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug May 15 11:55:47.031605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug May 15 11:55:47.031609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug May 15 11:55:47.031613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug May 15 11:55:47.031617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug May 15 11:55:47.031621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug May 15 11:55:47.031625 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug May 15 11:55:47.031630 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] May 15 11:55:47.031634 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff] May 15 11:55:47.031638 kernel: Zone ranges: May 15 11:55:47.031642 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 15 11:55:47.031649 kernel: DMA32 empty May 15 11:55:47.031653 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 15 11:55:47.031658 kernel: Device empty May 15 11:55:47.031662 kernel: Movable zone start for each node May 15 11:55:47.031666 kernel: Early memory node ranges May 15 11:55:47.031671 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 15 11:55:47.031676 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] May 15 11:55:47.031680 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] May 15 11:55:47.031684 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] May 15 11:55:47.031689 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 15 11:55:47.031693 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 15 11:55:47.031697 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 15 11:55:47.031701 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 15 11:55:47.031706 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 15 11:55:47.031710 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 15 11:55:47.031714 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 15 11:55:47.031719 kernel: psci: probing for conduit method from ACPI. May 15 11:55:47.031724 kernel: psci: PSCIv1.1 detected in firmware. May 15 11:55:47.031728 kernel: psci: Using standard PSCI v0.2 function IDs May 15 11:55:47.031732 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 15 11:55:47.031737 kernel: psci: SMC Calling Convention v1.4 May 15 11:55:47.031741 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 15 11:55:47.031745 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 May 15 11:55:47.031749 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 15 11:55:47.031754 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 15 11:55:47.031758 kernel: pcpu-alloc: [0] 0 [0] 1 May 15 11:55:47.031763 kernel: Detected PIPT I-cache on CPU0 May 15 11:55:47.031767 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) May 15 11:55:47.031772 kernel: CPU features: detected: GIC system register CPU interface May 15 11:55:47.031776 kernel: CPU features: detected: Spectre-v4 May 15 11:55:47.031781 kernel: CPU features: detected: Spectre-BHB May 15 11:55:47.031785 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 11:55:47.031789 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 11:55:47.031794 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 May 15 11:55:47.031798 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 11:55:47.031802 kernel: alternatives: applying boot alternatives May 15 11:55:47.031807 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bf509bd8a8efc068ea7b7cbdc99b42bf1cbaf8a0ba93f67c8f1cf632dc3496d8 May 15 11:55:47.031812 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 11:55:47.031817 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 11:55:47.031822 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 11:55:47.031826 kernel: Fallback order for Node 0: 0 May 15 11:55:47.031830 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 May 15 11:55:47.031834 kernel: Policy zone: Normal May 15 11:55:47.031839 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 11:55:47.031843 kernel: software IO TLB: area num 2. May 15 11:55:47.031847 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) May 15 11:55:47.031852 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 11:55:47.031856 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 11:55:47.031861 kernel: rcu: RCU event tracing is enabled. May 15 11:55:47.031865 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 11:55:47.031871 kernel: Trampoline variant of Tasks RCU enabled. May 15 11:55:47.031875 kernel: Tracing variant of Tasks RCU enabled. May 15 11:55:47.031879 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 11:55:47.031884 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 11:55:47.031888 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 11:55:47.031892 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 15 11:55:47.031897 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 11:55:47.031901 kernel: GICv3: 960 SPIs implemented May 15 11:55:47.031905 kernel: GICv3: 0 Extended SPIs implemented May 15 11:55:47.031910 kernel: Root IRQ handler: gic_handle_irq May 15 11:55:47.031914 kernel: GICv3: GICv3 features: 16 PPIs, RSS May 15 11:55:47.031918 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 May 15 11:55:47.031923 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 15 11:55:47.031928 kernel: ITS: No ITS available, not enabling LPIs May 15 11:55:47.031932 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 11:55:47.031936 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). May 15 11:55:47.031941 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 11:55:47.031945 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns May 15 11:55:47.031950 kernel: Console: colour dummy device 80x25 May 15 11:55:47.031954 kernel: printk: legacy console [tty1] enabled May 15 11:55:47.031959 kernel: ACPI: Core revision 20240827 May 15 11:55:47.031964 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) May 15 11:55:47.031969 kernel: pid_max: default: 32768 minimum: 301 May 15 11:55:47.031973 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 15 11:55:47.031978 kernel: landlock: Up and running. May 15 11:55:47.031982 kernel: SELinux: Initializing. May 15 11:55:47.031987 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 11:55:47.031991 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 11:55:47.031999 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 May 15 11:55:47.032004 kernel: Hyper-V: Host Build 10.0.26100.1254-1-0 May 15 11:55:47.032009 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 15 11:55:47.032014 kernel: rcu: Hierarchical SRCU implementation. May 15 11:55:47.032019 kernel: rcu: Max phase no-delay instances is 400. May 15 11:55:47.032023 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 15 11:55:47.032029 kernel: Remapping and enabling EFI services. May 15 11:55:47.032033 kernel: smp: Bringing up secondary CPUs ... May 15 11:55:47.032038 kernel: Detected PIPT I-cache on CPU1 May 15 11:55:47.032043 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 15 11:55:47.032047 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] May 15 11:55:47.032053 kernel: smp: Brought up 1 node, 2 CPUs May 15 11:55:47.032058 kernel: SMP: Total of 2 processors activated. 
May 15 11:55:47.032062 kernel: CPU: All CPU(s) started at EL1 May 15 11:55:47.032067 kernel: CPU features: detected: 32-bit EL0 Support May 15 11:55:47.032072 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 15 11:55:47.032077 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 11:55:47.032081 kernel: CPU features: detected: Common not Private translations May 15 11:55:47.032086 kernel: CPU features: detected: CRC32 instructions May 15 11:55:47.032091 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) May 15 11:55:47.032096 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 11:55:47.032101 kernel: CPU features: detected: LSE atomic instructions May 15 11:55:47.032106 kernel: CPU features: detected: Privileged Access Never May 15 11:55:47.032110 kernel: CPU features: detected: Speculation barrier (SB) May 15 11:55:47.032115 kernel: CPU features: detected: TLB range maintenance instructions May 15 11:55:47.032120 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 11:55:47.032125 kernel: CPU features: detected: Scalable Vector Extension May 15 11:55:47.032129 kernel: alternatives: applying system-wide alternatives May 15 11:55:47.032134 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 May 15 11:55:47.032139 kernel: SVE: maximum available vector length 16 bytes per vector May 15 11:55:47.032144 kernel: SVE: default vector length 16 bytes per vector May 15 11:55:47.032149 kernel: Memory: 3976108K/4194160K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved) May 15 11:55:47.032154 kernel: devtmpfs: initialized May 15 11:55:47.032159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 11:55:47.032164 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 11:55:47.032168 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 11:55:47.032173 kernel: 0 pages in range for non-PLT usage May 15 11:55:47.032178 kernel: 508544 pages in range for PLT usage May 15 11:55:47.032183 kernel: pinctrl core: initialized pinctrl subsystem May 15 11:55:47.032188 kernel: SMBIOS 3.1.0 present. May 15 11:55:47.032192 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 15 11:55:47.032197 kernel: DMI: Memory slots populated: 2/2 May 15 11:55:47.032202 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 11:55:47.032206 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 11:55:47.036845 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 11:55:47.036858 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 11:55:47.036863 kernel: audit: initializing netlink subsys (disabled) May 15 11:55:47.036874 kernel: audit: type=2000 audit(0.063:1): state=initialized audit_enabled=0 res=1 May 15 11:55:47.036879 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 11:55:47.036884 kernel: cpuidle: using governor menu May 15 11:55:47.036888 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 15 11:55:47.036893 kernel: ASID allocator initialised with 32768 entries May 15 11:55:47.036898 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 11:55:47.036903 kernel: Serial: AMBA PL011 UART driver May 15 11:55:47.036907 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 11:55:47.036912 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 11:55:47.036918 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 11:55:47.036923 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 11:55:47.036928 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 11:55:47.036932 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 11:55:47.036937 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 11:55:47.036942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 11:55:47.036947 kernel: ACPI: Added _OSI(Module Device) May 15 11:55:47.036951 kernel: ACPI: Added _OSI(Processor Device) May 15 11:55:47.036956 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 11:55:47.036962 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 11:55:47.036967 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 11:55:47.036971 kernel: ACPI: Interpreter enabled May 15 11:55:47.036976 kernel: ACPI: Using GIC for interrupt routing May 15 11:55:47.036981 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 15 11:55:47.036986 kernel: printk: legacy console [ttyAMA0] enabled May 15 11:55:47.036991 kernel: printk: legacy bootconsole [pl11] disabled May 15 11:55:47.036995 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 15 11:55:47.037000 kernel: ACPI: CPU0 has been hot-added May 15 11:55:47.037006 kernel: ACPI: CPU1 has been hot-added May 15 11:55:47.037010 kernel: iommu: Default domain type: Translated May 15 11:55:47.037015 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 11:55:47.037020 kernel: efivars: Registered efivars operations May 15 11:55:47.037024 kernel: vgaarb: loaded May 15 11:55:47.037029 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 11:55:47.037034 kernel: VFS: Disk quotas dquot_6.6.0 May 15 11:55:47.037039 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 11:55:47.037044 kernel: pnp: PnP ACPI init May 15 11:55:47.037049 kernel: pnp: PnP ACPI: found 0 devices May 15 11:55:47.037054 kernel: NET: Registered PF_INET protocol family May 15 11:55:47.037059 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 11:55:47.037064 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 11:55:47.037068 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 11:55:47.037073 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 11:55:47.037078 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 11:55:47.037083 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 11:55:47.037088 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 11:55:47.037093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 11:55:47.037105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 11:55:47.037111 kernel: PCI: CLS 0 bytes, default 64 
May 15 11:55:47.037116 kernel: kvm [1]: HYP mode not available May 15 11:55:47.037123 kernel: Initialise system trusted keyrings May 15 11:55:47.037128 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 11:55:47.037133 kernel: Key type asymmetric registered May 15 11:55:47.037137 kernel: Asymmetric key parser 'x509' registered May 15 11:55:47.037142 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 11:55:47.037148 kernel: io scheduler mq-deadline registered May 15 11:55:47.037153 kernel: io scheduler kyber registered May 15 11:55:47.037158 kernel: io scheduler bfq registered May 15 11:55:47.037162 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 11:55:47.037167 kernel: thunder_xcv, ver 1.0 May 15 11:55:47.037172 kernel: thunder_bgx, ver 1.0 May 15 11:55:47.037176 kernel: nicpf, ver 1.0 May 15 11:55:47.037181 kernel: nicvf, ver 1.0 May 15 11:55:47.037328 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 11:55:47.037384 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T11:55:46 UTC (1747310146) May 15 11:55:47.037390 kernel: efifb: probing for efifb May 15 11:55:47.037395 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 15 11:55:47.037400 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 15 11:55:47.037405 kernel: efifb: scrolling: redraw May 15 11:55:47.037410 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 15 11:55:47.037414 kernel: Console: switching to colour frame buffer device 128x48 May 15 11:55:47.037419 kernel: fb0: EFI VGA frame buffer device May 15 11:55:47.037425 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... May 15 11:55:47.037430 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 11:55:47.037435 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available May 15 11:55:47.037440 kernel: watchdog: NMI not fully supported May 15 11:55:47.037444 kernel: watchdog: Hard watchdog permanently disabled May 15 11:55:47.037449 kernel: NET: Registered PF_INET6 protocol family May 15 11:55:47.037454 kernel: Segment Routing with IPv6 May 15 11:55:47.037459 kernel: In-situ OAM (IOAM) with IPv6 May 15 11:55:47.037463 kernel: NET: Registered PF_PACKET protocol family May 15 11:55:47.037469 kernel: Key type dns_resolver registered May 15 11:55:47.037474 kernel: registered taskstats version 1 May 15 11:55:47.037478 kernel: Loading compiled-in X.509 certificates May 15 11:55:47.037483 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 6c8c7c40bf8565fead88558d446d0157ca21f08d' May 15 11:55:47.037488 kernel: Demotion targets for Node 0: null May 15 11:55:47.037493 kernel: Key type .fscrypt registered May 15 11:55:47.037497 kernel: Key type fscrypt-provisioning registered May 15 11:55:47.037502 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 11:55:47.037507 kernel: ima: Allocated hash algorithm: sha1 May 15 11:55:47.037512 kernel: ima: No architecture policies found May 15 11:55:47.037517 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 11:55:47.037522 kernel: clk: Disabling unused clocks May 15 11:55:47.037527 kernel: PM: genpd: Disabling unused power domains May 15 11:55:47.037531 kernel: Warning: unable to open an initial console. 
May 15 11:55:47.037536 kernel: Freeing unused kernel memory: 39424K May 15 11:55:47.037541 kernel: Run /init as init process May 15 11:55:47.037546 kernel: with arguments: May 15 11:55:47.037550 kernel: /init May 15 11:55:47.037556 kernel: with environment: May 15 11:55:47.037560 kernel: HOME=/ May 15 11:55:47.037565 kernel: TERM=linux May 15 11:55:47.037570 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 11:55:47.037575 systemd[1]: Successfully made /usr/ read-only. May 15 11:55:47.037582 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 11:55:47.037588 systemd[1]: Detected virtualization microsoft. May 15 11:55:47.037593 systemd[1]: Detected architecture arm64. May 15 11:55:47.037598 systemd[1]: Running in initrd. May 15 11:55:47.037603 systemd[1]: No hostname configured, using default hostname. May 15 11:55:47.037609 systemd[1]: Hostname set to . May 15 11:55:47.037614 systemd[1]: Initializing machine ID from random generator. May 15 11:55:47.037619 systemd[1]: Queued start job for default target initrd.target. May 15 11:55:47.037624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 11:55:47.037629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 11:55:47.037636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 11:55:47.037642 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 11:55:47.037647 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 11:55:47.037653 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 11:55:47.037659 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 11:55:47.037664 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 11:55:47.037669 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 11:55:47.037675 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 11:55:47.037681 systemd[1]: Reached target paths.target - Path Units. May 15 11:55:47.037686 systemd[1]: Reached target slices.target - Slice Units. May 15 11:55:47.037691 systemd[1]: Reached target swap.target - Swaps. May 15 11:55:47.037696 systemd[1]: Reached target timers.target - Timer Units. May 15 11:55:47.037701 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 11:55:47.037706 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 11:55:47.037712 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 11:55:47.037717 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 11:55:47.037723 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 11:55:47.037728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 15 11:55:47.037733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 11:55:47.037738 systemd[1]: Reached target sockets.target - Socket Units. May 15 11:55:47.037743 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 11:55:47.037749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 11:55:47.037754 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 11:55:47.037759 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 15 11:55:47.037765 systemd[1]: Starting systemd-fsck-usr.service... May 15 11:55:47.037770 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 11:55:47.037775 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 11:55:47.037793 systemd-journald[224]: Collecting audit messages is disabled. May 15 11:55:47.037807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:55:47.037813 systemd-journald[224]: Journal started May 15 11:55:47.037827 systemd-journald[224]: Runtime Journal (/run/log/journal/22345abb59c34ca3bf6f795791cc8f63) is 8M, max 78.5M, 70.5M free. May 15 11:55:47.037342 systemd-modules-load[226]: Inserted module 'overlay' May 15 11:55:47.052045 systemd[1]: Started systemd-journald.service - Journal Service. May 15 11:55:47.052615 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 11:55:47.071277 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 11:55:47.071293 kernel: Bridge firewalling registered May 15 11:55:47.063702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 11:55:47.070176 systemd-modules-load[226]: Inserted module 'br_netfilter' May 15 11:55:47.074988 systemd[1]: Finished systemd-fsck-usr.service. May 15 11:55:47.090186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 11:55:47.095455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 11:55:47.104350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 11:55:47.119759 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 11:55:47.129586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 11:55:47.141994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 11:55:47.153047 systemd-tmpfiles[251]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 15 11:55:47.157230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 11:55:47.171461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 11:55:47.182031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 11:55:47.186919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 11:55:47.193033 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 11:55:47.217794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 15 11:55:47.222097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 11:55:47.240014 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bf509bd8a8efc068ea7b7cbdc99b42bf1cbaf8a0ba93f67c8f1cf632dc3496d8 May 15 11:55:47.240813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 11:55:47.285334 systemd-resolved[262]: Positive Trust Anchors: May 15 11:55:47.285346 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 11:55:47.285365 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 11:55:47.287419 systemd-resolved[262]: Defaulting to hostname 'linux'. May 15 11:55:47.288875 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 11:55:47.293373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 11:55:47.377231 kernel: SCSI subsystem initialized May 15 11:55:47.382238 kernel: Loading iSCSI transport class v2.0-870. May 15 11:55:47.390256 kernel: iscsi: registered transport (tcp) May 15 11:55:47.402947 kernel: iscsi: registered transport (qla4xxx) May 15 11:55:47.402980 kernel: QLogic iSCSI HBA Driver May 15 11:55:47.415458 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 11:55:47.434459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 11:55:47.445829 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 11:55:47.486055 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 11:55:47.491855 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 11:55:47.550224 kernel: raid6: neonx8 gen() 18536 MB/s May 15 11:55:47.566219 kernel: raid6: neonx4 gen() 18563 MB/s May 15 11:55:47.585217 kernel: raid6: neonx2 gen() 17085 MB/s May 15 11:55:47.605231 kernel: raid6: neonx1 gen() 15030 MB/s May 15 11:55:47.624219 kernel: raid6: int64x8 gen() 10541 MB/s May 15 11:55:47.643218 kernel: raid6: int64x4 gen() 10615 MB/s May 15 11:55:47.663298 kernel: raid6: int64x2 gen() 8985 MB/s May 15 11:55:47.684970 kernel: raid6: int64x1 gen() 6996 MB/s May 15 11:55:47.685023 kernel: raid6: using algorithm neonx4 gen() 18563 MB/s May 15 11:55:47.706363 kernel: raid6: .... 
xor() 15145 MB/s, rmw enabled May 15 11:55:47.706402 kernel: raid6: using neon recovery algorithm May 15 11:55:47.714471 kernel: xor: measuring software checksum speed May 15 11:55:47.714480 kernel: 8regs : 28645 MB/sec May 15 11:55:47.716716 kernel: 32regs : 28828 MB/sec May 15 11:55:47.719082 kernel: arm64_neon : 37707 MB/sec May 15 11:55:47.722025 kernel: xor: using function: arm64_neon (37707 MB/sec) May 15 11:55:47.760226 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 11:55:47.765105 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 11:55:47.773311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 11:55:47.801623 systemd-udevd[473]: Using default interface naming scheme 'v255'. May 15 11:55:47.805487 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 11:55:47.814309 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 11:55:47.842400 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation May 15 11:55:47.861238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 11:55:47.867224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 11:55:47.912435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 11:55:47.923884 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 11:55:47.976232 kernel: hv_vmbus: Vmbus version:5.3 May 15 11:55:47.976964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 11:55:47.980766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 11:55:48.006412 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 11:55:48.006437 kernel: hv_vmbus: registering driver hid_hyperv May 15 11:55:48.006446 kernel: hv_vmbus: registering driver hv_netvsc May 15 11:55:48.006459 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 15 11:55:48.006467 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 15 11:55:48.010441 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:55:48.019348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:55:48.033564 kernel: hv_vmbus: registering driver hyperv_keyboard May 15 11:55:48.033577 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 11:55:48.047217 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 15 11:55:48.047527 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 11:55:48.048324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 11:55:48.072522 kernel: PTP clock support registered May 15 11:55:48.072544 kernel: hv_vmbus: registering driver hv_storvsc May 15 11:55:48.072551 kernel: scsi host1: storvsc_host_t May 15 11:55:48.072660 kernel: scsi host0: storvsc_host_t May 15 11:55:48.048381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 11:55:48.094141 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 15 11:55:48.094174 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 May 15 11:55:48.066620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:55:48.111754 kernel: hv_utils: Registering HyperV Utility Driver May 15 11:55:48.111782 kernel: hv_vmbus: registering driver hv_utils May 15 11:55:48.124096 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 15 11:55:47.878373 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 15 11:55:47.887333 kernel: hv_utils: Heartbeat IC version 3.0 May 15 11:55:47.887346 kernel: sd 0:0:0:0: [sda] Write Protect is off May 15 11:55:47.887450 kernel: hv_utils: Shutdown IC version 3.2 May 15 11:55:47.887457 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 15 11:55:47.887524 kernel: hv_utils: TimeSync IC version 4.0 May 15 11:55:47.887531 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 15 11:55:47.887591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 15 11:55:47.887657 kernel: hv_netvsc 000d3ac5-fcd4-000d-3ac5-fcd4000d3ac5 eth0: VF slot 1 added May 15 11:55:47.887714 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#140 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 15 11:55:47.887766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 11:55:47.887771 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 15 11:55:47.887829 systemd-journald[224]: Time jumped backwards, rotating. May 15 11:55:47.887859 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 15 11:55:47.892310 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 11:55:47.892326 kernel: hv_vmbus: registering driver hv_pci May 15 11:55:47.892342 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 15 11:55:47.892520 kernel: hv_pci 57494db8-64e2-485a-aa80-8110f0440b00: PCI VMBus probing: Using version 0x10004 May 15 11:55:47.953610 kernel: hv_pci 57494db8-64e2-485a-aa80-8110f0440b00: PCI host bridge to bus 64e2:00 May 15 11:55:47.953697 kernel: pci_bus 64e2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 15 11:55:47.953775 kernel: pci_bus 64e2:00: No busn resource found for root bus, will use [bus 00-ff] May 15 11:55:47.953836 kernel: pci 64e2:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint May 15 11:55:47.953905 kernel: pci 64e2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] May 15 11:55:47.953964 kernel: pci 64e2:00:02.0: enabling Extended Tags May 15 11:55:47.954021 kernel: pci 64e2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 64e2:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) May 15 11:55:47.954077 kernel: pci_bus 64e2:00: busn_res: [bus 00-ff] end is updated to 00 May 15 11:55:47.954129 kernel: pci 64e2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned May 15 11:55:47.848561 systemd-resolved[262]: Clock change detected. Flushing caches. May 15 11:55:47.866104 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 11:55:47.979486 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#140 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 15 11:55:48.000484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#179 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 15 11:55:48.018769 kernel: mlx5_core 64e2:00:02.0: enabling device (0000 -> 0002) May 15 11:55:48.204599 kernel: mlx5_core 64e2:00:02.0: PTM is not supported by PCIe May 15 11:55:48.204709 kernel: mlx5_core 64e2:00:02.0: firmware version: 16.30.5006 May 15 11:55:48.204781 kernel: hv_netvsc 000d3ac5-fcd4-000d-3ac5-fcd4000d3ac5 eth0: VF registering: eth1 May 15 11:55:48.204846 kernel: mlx5_core 64e2:00:02.0 eth1: joined to eth0 May 15 11:55:48.204912 kernel: mlx5_core 64e2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) May 15 11:55:48.210452 kernel: mlx5_core 64e2:00:02.0 enP25826s1: renamed from eth1 May 15 11:55:48.546740 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 15 11:55:48.552318 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 15 11:55:48.568213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 15 11:55:48.586357 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 15 11:55:48.597554 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 11:55:48.618658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 15 11:55:48.636763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#247 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 15 11:55:48.629130 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 11:55:48.640471 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 11:55:48.648739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 11:55:48.658915 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 11:55:48.667919 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 11:55:48.684858 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 11:55:48.696655 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 11:55:49.685642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#165 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 May 15 11:55:49.696456 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 11:55:49.696940 disk-uuid[651]: The operation has completed successfully. May 15 11:55:49.749691 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 11:55:49.749779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 11:55:49.780889 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 11:55:49.804037 sh[819]: Success May 15 11:55:49.836217 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 11:55:49.836248 kernel: device-mapper: uevent: version 1.0.3 May 15 11:55:49.841223 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 11:55:49.851584 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 15 11:55:50.048323 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
May 15 11:55:50.053109 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 11:55:50.067532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 11:55:50.091158 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 11:55:50.091186 kernel: BTRFS: device fsid 0a747134-9b18-4ef1-ad11-5025524c86c8 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (837) May 15 11:55:50.096443 kernel: BTRFS info (device dm-0): first mount of filesystem 0a747134-9b18-4ef1-ad11-5025524c86c8 May 15 11:55:50.100480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 11:55:50.103362 kernel: BTRFS info (device dm-0): using free-space-tree May 15 11:55:50.421988 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 11:55:50.425936 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 11:55:50.432732 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 11:55:50.434549 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 11:55:50.441234 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 11:55:50.480488 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (876) May 15 11:55:50.480517 kernel: BTRFS info (device sda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:55:50.484771 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 15 11:55:50.487665 kernel: BTRFS info (device sda6): using free-space-tree May 15 11:55:50.515451 kernel: BTRFS info (device sda6): last unmount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:55:50.516272 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 11:55:50.521241 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 11:55:50.555659 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 11:55:50.561335 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 11:55:50.599086 systemd-networkd[1006]: lo: Link UP May 15 11:55:50.599096 systemd-networkd[1006]: lo: Gained carrier May 15 11:55:50.600269 systemd-networkd[1006]: Enumeration completed May 15 11:55:50.601792 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 11:55:50.606022 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 11:55:50.606024 systemd-networkd[1006]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 11:55:50.606406 systemd[1]: Reached target network.target - Network. May 15 11:55:50.678450 kernel: mlx5_core 64e2:00:02.0 enP25826s1: Link up May 15 11:55:50.711456 kernel: hv_netvsc 000d3ac5-fcd4-000d-3ac5-fcd4000d3ac5 eth0: Data path switched to VF: enP25826s1 May 15 11:55:50.711671 systemd-networkd[1006]: enP25826s1: Link UP May 15 11:55:50.711722 systemd-networkd[1006]: eth0: Link UP May 15 11:55:50.711781 systemd-networkd[1006]: eth0: Gained carrier May 15 11:55:50.711788 systemd-networkd[1006]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 15 11:55:50.717573 systemd-networkd[1006]: enP25826s1: Gained carrier May 15 11:55:50.737467 systemd-networkd[1006]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 15 11:55:51.726990 ignition[959]: Ignition 2.21.0 May 15 11:55:51.727003 ignition[959]: Stage: fetch-offline May 15 11:55:51.731057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 11:55:51.727075 ignition[959]: no configs at "/usr/lib/ignition/base.d" May 15 11:55:51.737927 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 15 11:55:51.727081 ignition[959]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 11:55:51.727174 ignition[959]: parsed url from cmdline: "" May 15 11:55:51.727176 ignition[959]: no config URL provided May 15 11:55:51.727179 ignition[959]: reading system config file "/usr/lib/ignition/user.ign" May 15 11:55:51.727184 ignition[959]: no config at "/usr/lib/ignition/user.ign" May 15 11:55:51.727187 ignition[959]: failed to fetch config: resource requires networking May 15 11:55:51.727310 ignition[959]: Ignition finished successfully May 15 11:55:51.767017 ignition[1017]: Ignition 2.21.0 May 15 11:55:51.767027 ignition[1017]: Stage: fetch May 15 11:55:51.767178 ignition[1017]: no configs at "/usr/lib/ignition/base.d" May 15 11:55:51.767185 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 11:55:51.767249 ignition[1017]: parsed url from cmdline: "" May 15 11:55:51.767251 ignition[1017]: no config URL provided May 15 11:55:51.767254 ignition[1017]: reading system config file "/usr/lib/ignition/user.ign" May 15 11:55:51.767259 ignition[1017]: no config at "/usr/lib/ignition/user.ign" May 15 11:55:51.767297 ignition[1017]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 15 11:55:51.831826 ignition[1017]: GET result: OK May 15 11:55:51.831889 ignition[1017]: config has been read from IMDS userdata May 15 11:55:51.831907 ignition[1017]: parsing config with SHA512: fed4c1dbdb992260016ed203d411a9edfb8b8ab457da6cb465951e16f24b590df85fce6e7ba553fe7c9f6503b7a46b64a79f14212f531a9081287df1ea96d555 May 15 11:55:51.836263 unknown[1017]: fetched base config from "system" May 15 11:55:51.836978 ignition[1017]: fetch: fetch complete May 15 11:55:51.836272 unknown[1017]: fetched base config from "system" May 15 11:55:51.836984 ignition[1017]: fetch: fetch passed May 15 11:55:51.836275 unknown[1017]: fetched user config from "azure" May 15 11:55:51.837214 ignition[1017]: Ignition finished successfully May 15 11:55:51.838966 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 15 11:55:51.844818 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 11:55:51.877964 ignition[1024]: Ignition 2.21.0 May 15 11:55:51.880514 ignition[1024]: Stage: kargs May 15 11:55:51.880661 ignition[1024]: no configs at "/usr/lib/ignition/base.d" May 15 11:55:51.880668 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 11:55:51.884612 systemd-networkd[1006]: enP25826s1: Gained IPv6LL May 15 11:55:51.881157 ignition[1024]: kargs: kargs passed May 15 11:55:51.894703 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 11:55:51.881196 ignition[1024]: Ignition finished successfully May 15 11:55:51.899943 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 15 11:55:51.929156 ignition[1031]: Ignition 2.21.0 May 15 11:55:51.931557 ignition[1031]: Stage: disks May 15 11:55:51.931870 ignition[1031]: no configs at "/usr/lib/ignition/base.d" May 15 11:55:51.934582 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 11:55:51.931880 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 11:55:51.939205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 11:55:51.932805 ignition[1031]: disks: disks passed May 15 11:55:51.947132 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 11:55:51.932842 ignition[1031]: Ignition finished successfully May 15 11:55:51.956177 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 11:55:51.964704 systemd[1]: Reached target sysinit.target - System Initialization. May 15 11:55:51.973419 systemd[1]: Reached target basic.target - Basic System. May 15 11:55:51.980291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 11:55:52.012678 systemd-networkd[1006]: eth0: Gained IPv6LL May 15 11:55:52.062079 systemd-fsck[1039]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks May 15 11:55:52.069992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 11:55:52.077862 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 11:55:52.258456 kernel: EXT4-fs (sda9): mounted filesystem 7753583f-75f7-43aa-89cb-b5e5a7f28ed5 r/w with ordered data mode. Quota mode: none. May 15 11:55:52.258899 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 11:55:52.265207 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 11:55:52.286476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 11:55:52.299805 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 11:55:52.308248 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 15 11:55:52.327655 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1053) May 15 11:55:52.327671 kernel: BTRFS info (device sda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:55:52.322810 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 11:55:52.343919 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 15 11:55:52.343936 kernel: BTRFS info (device sda6): using free-space-tree May 15 11:55:52.322842 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 11:55:52.354479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 11:55:52.358959 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 11:55:52.370469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 11:55:52.979705 coreos-metadata[1055]: May 15 11:55:52.979 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 15 11:55:52.985555 coreos-metadata[1055]: May 15 11:55:52.984 INFO Fetch successful May 15 11:55:52.985555 coreos-metadata[1055]: May 15 11:55:52.984 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 15 11:55:52.997042 coreos-metadata[1055]: May 15 11:55:52.997 INFO Fetch successful May 15 11:55:52.997042 coreos-metadata[1055]: May 15 11:55:52.997 INFO wrote hostname ci-4334.0.0-a-59732b8df3 to /sysroot/etc/hostname May 15 11:55:53.001272 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 11:55:53.255453 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory May 15 11:55:53.290546 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory May 15 11:55:53.295064 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory May 15 11:55:53.301202 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory May 15 11:55:53.963884 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 11:55:53.969315 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 11:55:53.983929 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 11:55:53.991957 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 11:55:54.002608 kernel: BTRFS info (device sda6): last unmount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:55:54.013965 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 11:55:54.023038 ignition[1174]: INFO : Ignition 2.21.0 May 15 11:55:54.023038 ignition[1174]: INFO : Stage: mount May 15 11:55:54.023038 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 11:55:54.023038 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 11:55:54.023038 ignition[1174]: INFO : mount: mount passed May 15 11:55:54.023038 ignition[1174]: INFO : Ignition finished successfully May 15 11:55:54.024103 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 11:55:54.030030 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 11:55:54.054543 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 11:55:54.074443 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1185) May 15 11:55:54.083627 kernel: BTRFS info (device sda6): first mount of filesystem 3936141b-01f3-466e-a92a-4f7ff09b25a9 May 15 11:55:54.083651 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 15 11:55:54.086912 kernel: BTRFS info (device sda6): using free-space-tree May 15 11:55:54.089022 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 11:55:54.112294 ignition[1203]: INFO : Ignition 2.21.0 May 15 11:55:54.115370 ignition[1203]: INFO : Stage: files May 15 11:55:54.115370 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 11:55:54.115370 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 11:55:54.115370 ignition[1203]: DEBUG : files: compiled without relabeling support, skipping May 15 11:55:54.131349 ignition[1203]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 11:55:54.131349 ignition[1203]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 11:55:54.169270 ignition[1203]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 11:55:54.174538 ignition[1203]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 11:55:54.179532 ignition[1203]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 11:55:54.174568 unknown[1203]: wrote ssh authorized keys file for user: core May 15 11:55:54.205496 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 11:55:54.213599 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 15 11:55:54.273999 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 11:55:54.575927 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 11:55:54.575927 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 11:55:54.590250 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 11:55:54.590250 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 11:55:54.590250 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 11:55:54.590250 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 11:55:54.590250 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 11:55:54.590250 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 11:55:54.590250 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 11:55:54.638410 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 11:55:54.638410 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 11:55:54.638410 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:55:54.638410 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:55:54.638410 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:55:54.638410 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 15 11:55:55.054756 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 11:55:55.220618 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 11:55:55.220618 ignition[1203]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 11:55:55.256655 ignition[1203]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 11:55:55.269513 ignition[1203]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 11:55:55.269513 ignition[1203]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 11:55:55.269513 ignition[1203]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 15 11:55:55.301039 ignition[1203]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 15 11:55:55.301039 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 11:55:55.301039 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 11:55:55.301039 ignition[1203]: INFO : files: files passed May 15 11:55:55.301039 ignition[1203]: INFO : Ignition finished successfully May 15 11:55:55.278591 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 11:55:55.289894 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 11:55:55.317040 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 11:55:55.331659 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 11:55:55.358148 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 11:55:55.358148 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 11:55:55.335706 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 11:55:55.385919 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 11:55:55.355184 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 11:55:55.363686 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 11:55:55.374322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 11:55:55.417067 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 11:55:55.419480 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
May 15 11:55:55.425606 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 11:55:55.433852 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 11:55:55.441412 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 11:55:55.441939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 11:55:55.476219 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 11:55:55.482202 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 11:55:55.504476 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 11:55:55.509154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 11:55:55.517874 systemd[1]: Stopped target timers.target - Timer Units. May 15 11:55:55.525477 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 11:55:55.525602 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 11:55:55.536562 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 11:55:55.544656 systemd[1]: Stopped target basic.target - Basic System. May 15 11:55:55.551823 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 11:55:55.558948 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 11:55:55.567638 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 11:55:55.576046 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 11:55:55.584349 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 11:55:55.592132 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 11:55:55.600383 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 11:55:55.608665 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 11:55:55.616170 systemd[1]: Stopped target swap.target - Swaps. May 15 11:55:55.622562 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 11:55:55.622702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 11:55:55.633202 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 11:55:55.641140 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 11:55:55.649952 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 11:55:55.654510 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 11:55:55.659878 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 11:55:55.660000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 11:55:55.672909 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 11:55:55.673034 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 11:55:55.681774 systemd[1]: ignition-files.service: Deactivated successfully. May 15 11:55:55.681874 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 11:55:55.690236 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 15 11:55:55.690332 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 15 11:55:55.702538 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 11:55:55.713779 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 11:55:55.713949 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 11:55:55.769245 ignition[1255]: INFO : Ignition 2.21.0 May 15 11:55:55.769245 ignition[1255]: INFO : Stage: umount May 15 11:55:55.769245 ignition[1255]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 11:55:55.769245 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 15 11:55:55.769245 ignition[1255]: INFO : umount: umount passed May 15 11:55:55.769245 ignition[1255]: INFO : Ignition finished successfully May 15 11:55:55.725821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 11:55:55.737680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 11:55:55.737798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 11:55:55.743085 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 11:55:55.743159 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 11:55:55.762817 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 11:55:55.762885 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 11:55:55.774250 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 11:55:55.774640 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 11:55:55.774708 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 11:55:55.779986 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 11:55:55.780057 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 11:55:55.784846 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 11:55:55.784888 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 11:55:55.792791 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 11:55:55.792824 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 11:55:55.796966 systemd[1]: Stopped target network.target - Network. May 15 11:55:55.806086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 11:55:55.806130 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 11:55:55.813826 systemd[1]: Stopped target paths.target - Path Units. May 15 11:55:55.820801 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 11:55:55.824326 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 11:55:55.829255 systemd[1]: Stopped target slices.target - Slice Units. May 15 11:55:55.836341 systemd[1]: Stopped target sockets.target - Socket Units. May 15 11:55:55.844505 systemd[1]: iscsid.socket: Deactivated successfully. May 15 11:55:55.844537 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 11:55:55.852724 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 11:55:55.852752 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 11:55:55.861413 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 11:55:55.861456 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 11:55:55.869911 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
May 15 11:55:55.869941 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 11:55:55.879054 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 11:55:55.886772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 11:55:55.901043 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 11:55:55.901136 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 11:55:55.915425 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 11:55:56.086230 kernel: hv_netvsc 000d3ac5-fcd4-000d-3ac5-fcd4000d3ac5 eth0: Data path switched from VF: enP25826s1 May 15 11:55:55.915678 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 11:55:55.915756 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 11:55:55.927239 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 11:55:55.927672 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 11:55:55.935165 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 11:55:55.935199 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 11:55:55.944502 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 11:55:55.957387 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 11:55:55.957444 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 11:55:55.967633 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 11:55:55.967674 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 11:55:55.975281 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 11:55:55.975312 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 11:55:55.979942 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 11:55:55.979973 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 11:55:55.994793 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 11:55:56.002963 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 11:55:56.003013 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 11:55:56.026705 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 11:55:56.026865 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 11:55:56.033899 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 11:55:56.033929 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 11:55:56.042540 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 11:55:56.042563 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 11:55:56.051168 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 11:55:56.051215 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 11:55:56.063544 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 11:55:56.063588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 11:55:56.081222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 15 11:55:56.081262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 11:55:56.262136 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). May 15 11:55:56.098365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 11:55:56.113194 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 11:55:56.113255 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 11:55:56.121451 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 11:55:56.121489 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 11:55:56.129852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 11:55:56.129892 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 11:55:56.139452 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 15 11:55:56.139493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 11:55:56.139544 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 11:55:56.139775 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 11:55:56.139895 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 11:55:56.153123 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 11:55:56.153219 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 11:55:56.158199 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 11:55:56.158273 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 11:55:56.163069 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 11:55:56.163148 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 11:55:56.170092 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 11:55:56.178239 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 11:55:56.198840 systemd[1]: Switching root. May 15 11:55:56.346201 systemd-journald[224]: Journal stopped May 15 11:56:00.438060 kernel: SELinux: policy capability network_peer_controls=1 May 15 11:56:00.438077 kernel: SELinux: policy capability open_perms=1 May 15 11:56:00.438084 kernel: SELinux: policy capability extended_socket_class=1 May 15 11:56:00.438089 kernel: SELinux: policy capability always_check_network=0 May 15 11:56:00.438097 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 11:56:00.438102 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 11:56:00.438108 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 11:56:00.438113 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 11:56:00.438119 kernel: SELinux: policy capability userspace_initial_context=0 May 15 11:56:00.438125 systemd[1]: Successfully loaded SELinux policy in 148.088ms. May 15 11:56:00.438132 kernel: audit: type=1403 audit(1747310157.223:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 11:56:00.438138 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.618ms. 
May 15 11:56:00.438144 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 11:56:00.438150 systemd[1]: Detected virtualization microsoft. May 15 11:56:00.438157 systemd[1]: Detected architecture arm64. May 15 11:56:00.438163 systemd[1]: Detected first boot. May 15 11:56:00.438169 systemd[1]: Hostname set to <ci-4334.0.0-a-59732b8df3>. May 15 11:56:00.438175 systemd[1]: Initializing machine ID from random generator. May 15 11:56:00.438181 zram_generator::config[1297]: No configuration found. May 15 11:56:00.438187 kernel: NET: Registered PF_VSOCK protocol family May 15 11:56:00.438193 systemd[1]: Populated /etc with preset unit settings. May 15 11:56:00.438199 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 11:56:00.438206 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 11:56:00.438211 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 11:56:00.438217 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 11:56:00.438224 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 11:56:00.438230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 11:56:00.438236 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 11:56:00.438242 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 11:56:00.438249 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 11:56:00.438255 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 11:56:00.438261 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 11:56:00.438267 systemd[1]: Created slice user.slice - User and Session Slice. May 15 11:56:00.438273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 11:56:00.438279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 11:56:00.438285 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 11:56:00.438291 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 11:56:00.438297 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 11:56:00.438303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 11:56:00.438310 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 11:56:00.438317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 11:56:00.438323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 11:56:00.438329 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 11:56:00.438335 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 11:56:00.438341 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 11:56:00.438348 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 11:56:00.438354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 11:56:00.438360 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 11:56:00.438366 systemd[1]: Reached target slices.target - Slice Units. May 15 11:56:00.438372 systemd[1]: Reached target swap.target - Swaps. May 15 11:56:00.438378 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 11:56:00.438384 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 11:56:00.438392 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 11:56:00.438398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 11:56:00.438404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 11:56:00.438410 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 11:56:00.438417 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 11:56:00.438422 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 11:56:00.438430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 11:56:00.439195 systemd[1]: Mounting media.mount - External Media Directory... May 15 11:56:00.439214 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 11:56:00.439222 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 11:56:00.439230 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 11:56:00.439237 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 11:56:00.439243 systemd[1]: Reached target machines.target - Containers. May 15 11:56:00.439250 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 11:56:00.439260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 11:56:00.439266 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 11:56:00.439273 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 11:56:00.439279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 11:56:00.439285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 11:56:00.439291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 11:56:00.439297 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 11:56:00.439303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 11:56:00.439310 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 11:56:00.439318 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 11:56:00.439324 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 11:56:00.439330 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 11:56:00.439336 systemd[1]: Stopped systemd-fsck-usr.service. 
May 15 11:56:00.439343 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 11:56:00.439349 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 11:56:00.439355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 11:56:00.439362 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 11:56:00.439370 kernel: loop: module loaded May 15 11:56:00.439387 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 11:56:00.439394 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 11:56:00.439418 systemd-journald[1391]: Collecting audit messages is disabled. May 15 11:56:00.439441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 11:56:00.439448 kernel: ACPI: bus type drm_connector registered May 15 11:56:00.439454 systemd-journald[1391]: Journal started May 15 11:56:00.439468 systemd-journald[1391]: Runtime Journal (/run/log/journal/df10f8db5be14ed6a77164a2ab711e21) is 8M, max 78.5M, 70.5M free. May 15 11:55:59.770701 systemd[1]: Queued start job for default target multi-user.target. May 15 11:55:59.776856 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 15 11:55:59.777224 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 11:55:59.778620 systemd[1]: systemd-journald.service: Consumed 2.259s CPU time. May 15 11:56:00.451167 systemd[1]: verity-setup.service: Deactivated successfully. May 15 11:56:00.451196 systemd[1]: Stopped verity-setup.service. May 15 11:56:00.462754 systemd[1]: Started systemd-journald.service - Journal Service. May 15 11:56:00.463393 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 11:56:00.467398 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 11:56:00.474450 kernel: fuse: init (API version 7.41) May 15 11:56:00.474526 systemd[1]: Mounted media.mount - External Media Directory. May 15 11:56:00.479470 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 11:56:00.484198 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 11:56:00.488628 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 11:56:00.492329 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 11:56:00.497261 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 11:56:00.502060 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 11:56:00.502184 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 11:56:00.506663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 11:56:00.506775 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 11:56:00.511360 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 11:56:00.511499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 11:56:00.515720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 11:56:00.515824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 11:56:00.521029 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 15 11:56:00.521131 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 11:56:00.526310 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 11:56:00.526418 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 11:56:00.530665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 11:56:00.534887 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 11:56:00.539600 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 11:56:00.544317 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 11:56:00.549253 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 11:56:00.562271 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 11:56:00.568538 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 11:56:00.576521 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 11:56:00.580732 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 11:56:00.580759 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 11:56:00.585232 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 11:56:00.590802 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 11:56:00.594566 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 11:56:00.617136 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 11:56:00.628545 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 11:56:00.633716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 11:56:00.634455 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 11:56:00.639051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 11:56:00.648532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 11:56:00.655195 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 11:56:00.661617 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 11:56:00.669420 systemd-journald[1391]: Time spent on flushing to /var/log/journal/df10f8db5be14ed6a77164a2ab711e21 is 80.515ms for 935 entries. May 15 11:56:00.669420 systemd-journald[1391]: System Journal (/var/log/journal/df10f8db5be14ed6a77164a2ab711e21) is 11.8M, max 2.6G, 2.6G free. May 15 11:56:00.820937 systemd-journald[1391]: Received client request to flush runtime journal. May 15 11:56:00.820978 systemd-journald[1391]: /var/log/journal/df10f8db5be14ed6a77164a2ab711e21/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. May 15 11:56:00.820996 systemd-journald[1391]: Rotating system journal. May 15 11:56:00.821012 kernel: loop0: detected capacity change from 0 to 138376 May 15 11:56:00.670412 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
May 15 11:56:00.682432 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 11:56:00.687776 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 11:56:00.693343 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 11:56:00.709298 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 11:56:00.756042 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 11:56:00.764211 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 11:56:00.771911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 11:56:00.823469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 11:56:00.853750 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. May 15 11:56:00.853763 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. May 15 11:56:00.856634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 11:56:00.910047 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 11:56:00.910985 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 11:56:01.116459 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 11:56:01.159451 kernel: loop1: detected capacity change from 0 to 28640 May 15 11:56:01.506470 kernel: loop2: detected capacity change from 0 to 201592 May 15 11:56:01.547457 kernel: loop3: detected capacity change from 0 to 107312 May 15 11:56:01.765502 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 11:56:01.772017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 11:56:01.801492 systemd-udevd[1460]: Using default interface naming scheme 'v255'. May 15 11:56:01.928479 kernel: loop4: detected capacity change from 0 to 138376 May 15 11:56:01.936456 kernel: loop5: detected capacity change from 0 to 28640 May 15 11:56:01.943448 kernel: loop6: detected capacity change from 0 to 201592 May 15 11:56:01.950449 kernel: loop7: detected capacity change from 0 to 107312 May 15 11:56:01.952185 (sd-merge)[1462]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 15 11:56:01.952552 (sd-merge)[1462]: Merged extensions into '/usr'. May 15 11:56:01.954737 systemd[1]: Reload requested from client PID 1436 ('systemd-sysext') (unit systemd-sysext.service)... May 15 11:56:01.954824 systemd[1]: Reloading... May 15 11:56:02.010477 zram_generator::config[1493]: No configuration found. May 15 11:56:02.137083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 11:56:02.172618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 May 15 11:56:02.254319 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 11:56:02.254418 systemd[1]: Reloading finished in 299 ms. May 15 11:56:02.261934 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 11:56:02.273467 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
May 15 11:56:02.290297 kernel: mousedev: PS/2 mouse device common for all mice May 15 11:56:02.290355 kernel: hv_vmbus: registering driver hv_balloon May 15 11:56:02.290366 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 15 11:56:02.296293 kernel: hv_balloon: Memory hot add disabled on ARM64 May 15 11:56:02.294372 systemd[1]: Starting ensure-sysext.service... May 15 11:56:02.305905 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 11:56:02.315350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 11:56:02.340339 kernel: hv_vmbus: registering driver hyperv_fb May 15 11:56:02.340391 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 15 11:56:02.340485 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 15 11:56:02.355177 kernel: Console: switching to colour dummy device 80x25 May 15 11:56:02.355239 kernel: Console: switching to colour frame buffer device 128x48 May 15 11:56:02.357072 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 11:56:02.357096 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 11:56:02.357277 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 11:56:02.357408 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 11:56:02.357843 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 11:56:02.357976 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. May 15 11:56:02.358002 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. May 15 11:56:02.359747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:56:02.365849 systemd[1]: Reload requested from client PID 1604 ('systemctl') (unit ensure-sysext.service)... May 15 11:56:02.365863 systemd[1]: Reloading... May 15 11:56:02.377690 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot. May 15 11:56:02.377697 systemd-tmpfiles[1607]: Skipping /boot May 15 11:56:02.385141 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot. May 15 11:56:02.385155 systemd-tmpfiles[1607]: Skipping /boot May 15 11:56:02.426538 zram_generator::config[1640]: No configuration found. May 15 11:56:02.521679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 11:56:02.524466 kernel: MACsec IEEE 802.1AE May 15 11:56:02.609182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 15 11:56:02.614976 systemd[1]: Reloading finished in 248 ms. May 15 11:56:02.637569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 11:56:02.667768 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 11:56:02.687419 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 11:56:02.693783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 15 11:56:02.694637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 11:56:02.700613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 11:56:02.706624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 11:56:02.710893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 11:56:02.716657 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 11:56:02.722839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 11:56:02.723686 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 11:56:02.733991 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 11:56:02.738745 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 11:56:02.751463 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 11:56:02.758310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 11:56:02.761675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 11:56:02.766309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 11:56:02.766466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 11:56:02.771677 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 11:56:02.772011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 11:56:02.772135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 11:56:02.779113 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 11:56:02.779548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 11:56:02.787826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 11:56:02.800082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 11:56:02.812012 systemd[1]: Finished ensure-sysext.service. May 15 11:56:02.815814 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 11:56:02.821977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 11:56:02.822846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 11:56:02.830221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 11:56:02.835504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 11:56:02.842986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 11:56:02.847809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 11:56:02.847844 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 15 11:56:02.847883 systemd[1]: Reached target time-set.target - System Time Set. May 15 11:56:02.854548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 11:56:02.859935 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 11:56:02.867420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 11:56:02.867574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 11:56:02.872046 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 11:56:02.872151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 11:56:02.876221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 11:56:02.876326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 11:56:02.880970 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 11:56:02.881070 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 11:56:02.888146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 11:56:02.888206 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 11:56:02.915639 augenrules[1812]: No rules May 15 11:56:02.916749 systemd[1]: audit-rules.service: Deactivated successfully. May 15 11:56:02.919313 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 11:56:02.923991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 11:56:02.979032 systemd-resolved[1762]: Positive Trust Anchors: May 15 11:56:02.979043 systemd-resolved[1762]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 11:56:02.979062 systemd-resolved[1762]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 11:56:02.981681 systemd-resolved[1762]: Using system hostname 'ci-4334.0.0-a-59732b8df3'. May 15 11:56:03.008968 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 11:56:03.013324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 11:56:03.039185 systemd-networkd[1606]: lo: Link UP May 15 11:56:03.039192 systemd-networkd[1606]: lo: Gained carrier May 15 11:56:03.041520 systemd-networkd[1606]: Enumeration completed May 15 11:56:03.041606 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 11:56:03.042041 systemd-networkd[1606]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 11:56:03.042107 systemd-networkd[1606]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 11:56:03.045978 systemd[1]: Reached target network.target - Network. May 15 11:56:03.050511 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
May 15 11:56:03.056297 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 11:56:03.102486 kernel: mlx5_core 64e2:00:02.0 enP25826s1: Link up May 15 11:56:03.122453 kernel: hv_netvsc 000d3ac5-fcd4-000d-3ac5-fcd4000d3ac5 eth0: Data path switched to VF: enP25826s1 May 15 11:56:03.124053 systemd-networkd[1606]: enP25826s1: Link UP May 15 11:56:03.124168 systemd-networkd[1606]: eth0: Link UP May 15 11:56:03.124171 systemd-networkd[1606]: eth0: Gained carrier May 15 11:56:03.124184 systemd-networkd[1606]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 11:56:03.126498 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 11:56:03.132719 systemd-networkd[1606]: enP25826s1: Gained carrier May 15 11:56:03.142484 systemd-networkd[1606]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 15 11:56:03.346074 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 11:56:03.351345 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 11:56:04.364644 systemd-networkd[1606]: enP25826s1: Gained IPv6LL May 15 11:56:04.940581 systemd-networkd[1606]: eth0: Gained IPv6LL May 15 11:56:04.942418 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 11:56:04.948288 systemd[1]: Reached target network-online.target - Network is Online. May 15 11:56:06.882224 ldconfig[1431]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 11:56:06.892389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 11:56:06.898491 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 11:56:06.918871 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 11:56:06.923895 systemd[1]: Reached target sysinit.target - System Initialization. May 15 11:56:06.928752 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 11:56:06.934231 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 11:56:06.939930 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 11:56:06.944628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 11:56:06.950155 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 11:56:06.955703 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 11:56:06.955727 systemd[1]: Reached target paths.target - Path Units. May 15 11:56:06.959710 systemd[1]: Reached target timers.target - Timer Units. May 15 11:56:06.964748 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 11:56:06.970842 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 11:56:06.977004 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
May 15 11:56:06.982756 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 11:56:06.988411 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 11:56:06.994741 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 11:56:06.999753 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 11:56:07.005323 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 11:56:07.010049 systemd[1]: Reached target sockets.target - Socket Units. May 15 11:56:07.014166 systemd[1]: Reached target basic.target - Basic System. May 15 11:56:07.018473 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 11:56:07.018493 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 11:56:07.020134 systemd[1]: Starting chronyd.service - NTP client/server... May 15 11:56:07.032522 systemd[1]: Starting containerd.service - containerd container runtime... May 15 11:56:07.039595 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 11:56:07.044739 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 11:56:07.051617 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 11:56:07.062593 (chronyd)[1831]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 15 11:56:07.065533 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 11:56:07.070648 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 11:56:07.072456 jq[1839]: false May 15 11:56:07.075341 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 11:56:07.078193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:07.083536 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 11:56:07.088405 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 11:56:07.094559 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 11:56:07.101557 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 11:56:07.110544 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 11:56:07.115789 chronyd[1852]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 15 11:56:07.120561 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 11:56:07.120777 chronyd[1852]: Timezone right/UTC failed leap second check, ignoring May 15 11:56:07.124945 chronyd[1852]: Loaded seccomp filter (level 2) May 15 11:56:07.126897 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 11:56:07.127192 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 11:56:07.128923 systemd[1]: Starting update-engine.service - Update Engine... May 15 11:56:07.139530 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
May 15 11:56:07.146249 jq[1860]: true May 15 11:56:07.148194 systemd[1]: Started chronyd.service - NTP client/server. May 15 11:56:07.152269 extend-filesystems[1840]: Found loop4 May 15 11:56:07.156625 extend-filesystems[1840]: Found loop5 May 15 11:56:07.156625 extend-filesystems[1840]: Found loop6 May 15 11:56:07.156625 extend-filesystems[1840]: Found loop7 May 15 11:56:07.156625 extend-filesystems[1840]: Found sda May 15 11:56:07.156625 extend-filesystems[1840]: Found sda1 May 15 11:56:07.156625 extend-filesystems[1840]: Found sda2 May 15 11:56:07.156625 extend-filesystems[1840]: Found sda3 May 15 11:56:07.156625 extend-filesystems[1840]: Found usr May 15 11:56:07.156625 extend-filesystems[1840]: Found sda4 May 15 11:56:07.156625 extend-filesystems[1840]: Found sda6 May 15 11:56:07.156625 extend-filesystems[1840]: Found sda7 May 15 11:56:07.156625 extend-filesystems[1840]: Found sda9 May 15 11:56:07.156625 extend-filesystems[1840]: Checking size of /dev/sda9 May 15 11:56:07.155771 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 11:56:07.301423 update_engine[1858]: I20250515 11:56:07.208057 1858 main.cc:92] Flatcar Update Engine starting May 15 11:56:07.301613 extend-filesystems[1840]: Old size kept for /dev/sda9 May 15 11:56:07.301613 extend-filesystems[1840]: Found sr0 May 15 11:56:07.166689 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 11:56:07.166908 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 11:56:07.307449 tar[1868]: linux-arm64/LICENSE May 15 11:56:07.307449 tar[1868]: linux-arm64/helm May 15 11:56:07.168030 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 11:56:07.168386 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 11:56:07.177234 systemd[1]: motdgen.service: Deactivated successfully. May 15 11:56:07.307788 jq[1872]: true May 15 11:56:07.180070 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 11:56:07.198191 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 11:56:07.228774 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 11:56:07.228952 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 11:56:07.229003 (ntainerd)[1874]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 11:56:07.265161 systemd-logind[1854]: New seat seat0. May 15 11:56:07.266102 systemd-logind[1854]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 15 11:56:07.266276 systemd[1]: Started systemd-logind.service - User Login Management. May 15 11:56:07.417824 dbus-daemon[1834]: [system] SELinux support is enabled May 15 11:56:07.418129 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 11:56:07.426876 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 11:56:07.426901 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 15 11:56:07.434977 update_engine[1858]: I20250515 11:56:07.434883 1858 update_check_scheduler.cc:74] Next update check in 2m41s May 15 11:56:07.435896 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 11:56:07.435921 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 11:56:07.445057 dbus-daemon[1834]: [system] Successfully activated service 'org.freedesktop.systemd1' May 15 11:56:07.445197 systemd[1]: Started update-engine.service - Update Engine. May 15 11:56:07.453014 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 11:56:07.458845 bash[1926]: Updated "/home/core/.ssh/authorized_keys" May 15 11:56:07.460136 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 11:56:07.469369 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 11:56:07.496077 sshd_keygen[1856]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 11:56:07.511629 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 11:56:07.518034 coreos-metadata[1833]: May 15 11:56:07.518 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 15 11:56:07.520173 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 11:56:07.524072 coreos-metadata[1833]: May 15 11:56:07.524 INFO Fetch successful May 15 11:56:07.525606 coreos-metadata[1833]: May 15 11:56:07.525 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 15 11:56:07.527754 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 15 11:56:07.533680 coreos-metadata[1833]: May 15 11:56:07.532 INFO Fetch successful May 15 11:56:07.533680 coreos-metadata[1833]: May 15 11:56:07.532 INFO Fetching http://168.63.129.16/machine/2b799036-8771-4807-856b-af003f39c6df/2cdc53c1%2D9327%2D44d6%2D8cb5%2D37a5bd36fb53.%5Fci%2D4334.0.0%2Da%2D59732b8df3?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 15 11:56:07.536074 coreos-metadata[1833]: May 15 11:56:07.535 INFO Fetch successful May 15 11:56:07.536074 coreos-metadata[1833]: May 15 11:56:07.535 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 15 11:56:07.545072 coreos-metadata[1833]: May 15 11:56:07.544 INFO Fetch successful May 15 11:56:07.550715 systemd[1]: issuegen.service: Deactivated successfully. May 15 11:56:07.551319 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 11:56:07.563712 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 11:56:07.580045 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 15 11:56:07.595508 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 11:56:07.605545 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 11:56:07.654954 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 11:56:07.664241 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 11:56:07.675539 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 11:56:07.685065 systemd[1]: Reached target getty.target - Login Prompts. 
May 15 11:56:07.794105 tar[1868]: linux-arm64/README.md May 15 11:56:07.808858 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 11:56:07.851401 locksmithd[1973]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 11:56:07.877774 containerd[1874]: time="2025-05-15T11:56:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 11:56:07.879456 containerd[1874]: time="2025-05-15T11:56:07.879215264Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 11:56:07.886341 containerd[1874]: time="2025-05-15T11:56:07.886310840Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.088µs" May 15 11:56:07.886341 containerd[1874]: time="2025-05-15T11:56:07.886336296Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 11:56:07.886423 containerd[1874]: time="2025-05-15T11:56:07.886351232Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 11:56:07.887725 containerd[1874]: time="2025-05-15T11:56:07.887698160Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.887766552Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.887811488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.887881936Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.887890344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.888046856Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.888056712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.888063856Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.888068616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 11:56:07.888260 containerd[1874]: time="2025-05-15T11:56:07.888129208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 11:56:07.888484 containerd[1874]: time="2025-05-15T11:56:07.888466288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 15 11:56:07.888566 containerd[1874]: time="2025-05-15T11:56:07.888553680Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 11:56:07.888608 containerd[1874]: time="2025-05-15T11:56:07.888595576Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 11:56:07.888665 containerd[1874]: time="2025-05-15T11:56:07.888654976Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 11:56:07.888881 containerd[1874]: time="2025-05-15T11:56:07.888867024Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 11:56:07.889008 containerd[1874]: time="2025-05-15T11:56:07.888992920Z" level=info msg="metadata content store policy set" policy=shared May 15 11:56:07.902588 containerd[1874]: time="2025-05-15T11:56:07.902565840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 11:56:07.902683 containerd[1874]: time="2025-05-15T11:56:07.902672776Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 11:56:07.902751 containerd[1874]: time="2025-05-15T11:56:07.902743136Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 11:56:07.902796 containerd[1874]: time="2025-05-15T11:56:07.902784952Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 11:56:07.902842 containerd[1874]: time="2025-05-15T11:56:07.902831960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 11:56:07.902883 containerd[1874]: time="2025-05-15T11:56:07.902872624Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.902964240Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.902980912Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.902988472Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.902995944Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903001640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903009648Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903107352Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903120872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 11:56:07.903269 
containerd[1874]: time="2025-05-15T11:56:07.903140000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903146592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903152760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903159720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903166584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903172536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 11:56:07.903269 containerd[1874]: time="2025-05-15T11:56:07.903179056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 11:56:07.903503 containerd[1874]: time="2025-05-15T11:56:07.903184688Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 11:56:07.903503 containerd[1874]: time="2025-05-15T11:56:07.903191568Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 11:56:07.903503 containerd[1874]: time="2025-05-15T11:56:07.903238480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 11:56:07.903503 containerd[1874]: time="2025-05-15T11:56:07.903248040Z" level=info msg="Start snapshots syncer" May 15 11:56:07.903622 containerd[1874]: time="2025-05-15T11:56:07.903601312Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 11:56:07.903859 containerd[1874]: time="2025-05-15T11:56:07.903829472Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 11:56:07.903983 containerd[1874]: time="2025-05-15T11:56:07.903968992Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 11:56:07.904095 containerd[1874]: time="2025-05-15T11:56:07.904080848Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 11:56:07.904256 containerd[1874]: time="2025-05-15T11:56:07.904238120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 11:56:07.904320 containerd[1874]: time="2025-05-15T11:56:07.904308720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 11:56:07.904479 containerd[1874]: time="2025-05-15T11:56:07.904358240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 11:56:07.904479 containerd[1874]: time="2025-05-15T11:56:07.904373712Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 11:56:07.904479 containerd[1874]: time="2025-05-15T11:56:07.904383376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 11:56:07.904479 containerd[1874]: time="2025-05-15T11:56:07.904390752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 11:56:07.904479 containerd[1874]: time="2025-05-15T11:56:07.904402808Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 11:56:07.904479 containerd[1874]: time="2025-05-15T11:56:07.904421000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 11:56:07.904479 containerd[1874]: 
time="2025-05-15T11:56:07.904428152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 11:56:07.904479 containerd[1874]: time="2025-05-15T11:56:07.904447416Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 11:56:07.904671 containerd[1874]: time="2025-05-15T11:56:07.904623328Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 11:56:07.904671 containerd[1874]: time="2025-05-15T11:56:07.904642528Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 11:56:07.904671 containerd[1874]: time="2025-05-15T11:56:07.904649224Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 11:56:07.904671 containerd[1874]: time="2025-05-15T11:56:07.904656184Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904660856Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904787352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904795480Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904806360Z" level=info msg="runtime interface created" May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904809664Z" level=info msg="created NRI interface" May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904814680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904822320Z" level=info msg="Connect containerd service" May 15 11:56:07.904866 containerd[1874]: time="2025-05-15T11:56:07.904845640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 11:56:07.905776 containerd[1874]: time="2025-05-15T11:56:07.905586080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 11:56:07.929059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
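The containerd error just above is expected at this stage of boot: nothing has installed a CNI configuration yet, so the CRI plugin cannot set up pod networking and will retry later. Purely as an illustration of what it is looking for, a hypothetical minimal bridge conflist (the name and subnet below are invented; on a real cluster the CNI plugin ships this file rather than it being written by hand):

# Drop a minimal CNI config where containerd's CRI plugin looks for one.
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-example.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    }
  ]
}
EOF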
May 15 11:56:08.036078 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 11:56:08.270519 kubelet[2024]: E0515 11:56:08.270397 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 11:56:08.272639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 11:56:08.272747 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 11:56:08.273099 systemd[1]: kubelet.service: Consumed 512ms CPU time, 245.1M memory peak. May 15 11:56:08.792632 containerd[1874]: time="2025-05-15T11:56:08.792542752Z" level=info msg="Start subscribing containerd event" May 15 11:56:08.792758 containerd[1874]: time="2025-05-15T11:56:08.792741712Z" level=info msg="Start recovering state" May 15 11:56:08.792852 containerd[1874]: time="2025-05-15T11:56:08.792836280Z" level=info msg="Start event monitor" May 15 11:56:08.792895 containerd[1874]: time="2025-05-15T11:56:08.792852704Z" level=info msg="Start cni network conf syncer for default" May 15 11:56:08.792895 containerd[1874]: time="2025-05-15T11:56:08.792864536Z" level=info msg="Start streaming server" May 15 11:56:08.792895 containerd[1874]: time="2025-05-15T11:56:08.792872024Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 11:56:08.792895 containerd[1874]: time="2025-05-15T11:56:08.792877920Z" level=info msg="runtime interface starting up..." May 15 11:56:08.792895 containerd[1874]: time="2025-05-15T11:56:08.792884504Z" level=info msg="starting plugins..." May 15 11:56:08.792895 containerd[1874]: time="2025-05-15T11:56:08.792897008Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 11:56:08.795818 containerd[1874]: time="2025-05-15T11:56:08.793088992Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 11:56:08.795818 containerd[1874]: time="2025-05-15T11:56:08.793138320Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 11:56:08.795818 containerd[1874]: time="2025-05-15T11:56:08.793184856Z" level=info msg="containerd successfully booted in 0.915833s" May 15 11:56:08.793568 systemd[1]: Started containerd.service - containerd container runtime. May 15 11:56:08.799569 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 11:56:08.807484 systemd[1]: Startup finished in 1.646s (kernel) + 10.743s (initrd) + 11.731s (userspace) = 24.122s. May 15 11:56:09.127723 login[2003]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:09.128231 login[2002]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:09.137234 systemd-logind[1854]: New session 1 of user core. May 15 11:56:09.137507 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 11:56:09.140577 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 11:56:09.143330 systemd-logind[1854]: New session 2 of user core. May 15 11:56:09.168476 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 11:56:09.170888 systemd[1]: Starting user@500.service - User Manager for UID 500... 
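The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is only written once the node joins a cluster (kubeadm generates it during init/join), so the unit fails and systemd keeps it on a restart timer, as seen again later in this log. Only to illustrate the shape of the missing file, a hypothetical bare-bones KubeletConfiguration; the values below are assumptions, not this node's real settings:

# Sketch: a minimal kubelet config of the kind kubeadm normally generates.
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
EOF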
May 15 11:56:09.178809 (systemd)[2049]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 11:56:09.180551 systemd-logind[1854]: New session c1 of user core. May 15 11:56:09.251344 waagent[1998]: 2025-05-15T11:56:09.251275Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 15 11:56:09.260269 waagent[1998]: 2025-05-15T11:56:09.256624Z INFO Daemon Daemon OS: flatcar 4334.0.0 May 15 11:56:09.260542 waagent[1998]: 2025-05-15T11:56:09.260506Z INFO Daemon Daemon Python: 3.11.12 May 15 11:56:09.264488 waagent[1998]: 2025-05-15T11:56:09.264376Z INFO Daemon Daemon Run daemon May 15 11:56:09.267793 waagent[1998]: 2025-05-15T11:56:09.267759Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4334.0.0' May 15 11:56:09.275752 waagent[1998]: 2025-05-15T11:56:09.275710Z INFO Daemon Daemon Using waagent for provisioning May 15 11:56:09.280669 waagent[1998]: 2025-05-15T11:56:09.280633Z INFO Daemon Daemon Activate resource disk May 15 11:56:09.284672 waagent[1998]: 2025-05-15T11:56:09.284638Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 15 11:56:09.293252 waagent[1998]: 2025-05-15T11:56:09.293213Z INFO Daemon Daemon Found device: None May 15 11:56:09.297237 waagent[1998]: 2025-05-15T11:56:09.297202Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 15 11:56:09.303876 waagent[1998]: 2025-05-15T11:56:09.303840Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 15 11:56:09.312996 waagent[1998]: 2025-05-15T11:56:09.312953Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 15 11:56:09.317481 waagent[1998]: 2025-05-15T11:56:09.317450Z INFO Daemon Daemon Running default provisioning handler May 15 11:56:09.327497 waagent[1998]: 2025-05-15T11:56:09.327449Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 15 11:56:09.338761 waagent[1998]: 2025-05-15T11:56:09.338727Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 15 11:56:09.384619 waagent[1998]: 2025-05-15T11:56:09.384523Z INFO Daemon Daemon cloud-init is enabled: False May 15 11:56:09.387969 systemd[2049]: Queued start job for default target default.target. May 15 11:56:09.388791 waagent[1998]: 2025-05-15T11:56:09.388753Z INFO Daemon Daemon Copying ovf-env.xml May 15 11:56:09.426618 systemd[2049]: Created slice app.slice - User Application Slice. May 15 11:56:09.426643 systemd[2049]: Reached target paths.target - Paths. May 15 11:56:09.426675 systemd[2049]: Reached target timers.target - Timers. May 15 11:56:09.427831 systemd[2049]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 11:56:09.435069 systemd[2049]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 11:56:09.435913 systemd[2049]: Reached target sockets.target - Sockets. May 15 11:56:09.435970 systemd[2049]: Reached target basic.target - Basic System. May 15 11:56:09.435991 systemd[2049]: Reached target default.target - Main User Target. May 15 11:56:09.436011 systemd[2049]: Startup finished in 251ms. May 15 11:56:09.436143 systemd[1]: Started user@500.service - User Manager for UID 500. 
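The "Unable to get cloud-init enabled status" entries above come from waagent running the exact command quoted in the log; on this image the unit does not exist, so systemctl exits with status 4 and the daemon concludes "cloud-init is enabled: False". The same probe by hand:

# Same check waagent runs above; the non-zero exit status is what the daemon reports.
systemctl is-enabled cloud-init-local.service
echo "exit status: $?"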
May 15 11:56:09.437397 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 11:56:09.440289 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 11:56:09.562724 waagent[1998]: 2025-05-15T11:56:09.559029Z INFO Daemon Daemon Successfully mounted dvd May 15 11:56:09.587010 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 15 11:56:09.587796 waagent[1998]: 2025-05-15T11:56:09.587710Z INFO Daemon Daemon Detect protocol endpoint May 15 11:56:09.592076 waagent[1998]: 2025-05-15T11:56:09.592004Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 15 11:56:09.597071 waagent[1998]: 2025-05-15T11:56:09.597006Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 15 11:56:09.602148 waagent[1998]: 2025-05-15T11:56:09.602085Z INFO Daemon Daemon Test for route to 168.63.129.16 May 15 11:56:09.606717 waagent[1998]: 2025-05-15T11:56:09.606650Z INFO Daemon Daemon Route to 168.63.129.16 exists May 15 11:56:09.611582 waagent[1998]: 2025-05-15T11:56:09.611514Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 15 11:56:09.662216 waagent[1998]: 2025-05-15T11:56:09.662095Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 15 11:56:09.667710 waagent[1998]: 2025-05-15T11:56:09.667673Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 15 11:56:09.672002 waagent[1998]: 2025-05-15T11:56:09.671937Z INFO Daemon Daemon Server preferred version:2015-04-05 May 15 11:56:09.782629 waagent[1998]: 2025-05-15T11:56:09.782558Z INFO Daemon Daemon Initializing goal state during protocol detection May 15 11:56:09.787861 waagent[1998]: 2025-05-15T11:56:09.787823Z INFO Daemon Daemon Forcing an update of the goal state. May 15 11:56:09.800428 waagent[1998]: 2025-05-15T11:56:09.800393Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 15 11:56:09.821303 waagent[1998]: 2025-05-15T11:56:09.821272Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 15 11:56:09.825796 waagent[1998]: 2025-05-15T11:56:09.825765Z INFO Daemon May 15 11:56:09.827993 waagent[1998]: 2025-05-15T11:56:09.827963Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 052b49cf-4c50-4ed0-83e7-9a4e60db0d49 eTag: 12566516059047332491 source: Fabric] May 15 11:56:09.836995 waagent[1998]: 2025-05-15T11:56:09.836965Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 15 11:56:09.842012 waagent[1998]: 2025-05-15T11:56:09.841984Z INFO Daemon May 15 11:56:09.844052 waagent[1998]: 2025-05-15T11:56:09.844024Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 15 11:56:09.852996 waagent[1998]: 2025-05-15T11:56:09.852970Z INFO Daemon Daemon Downloading artifacts profile blob May 15 11:56:09.934060 waagent[1998]: 2025-05-15T11:56:09.933993Z INFO Daemon Downloaded certificate {'thumbprint': 'EB5979F34EE25A4E33EEE1A66B0FEEC669DAE918', 'hasPrivateKey': False} May 15 11:56:09.941876 waagent[1998]: 2025-05-15T11:56:09.941845Z INFO Daemon Downloaded certificate {'thumbprint': '3AD752C5A1ACB9A4967067CBF39F2AD5386F64D6', 'hasPrivateKey': True} May 15 11:56:09.949261 waagent[1998]: 2025-05-15T11:56:09.949228Z INFO Daemon Fetch goal state completed May 15 11:56:09.998854 waagent[1998]: 2025-05-15T11:56:09.998820Z INFO Daemon Daemon Starting provisioning May 15 11:56:10.003370 waagent[1998]: 2025-05-15T11:56:10.003332Z INFO Daemon Daemon Handle ovf-env.xml. 
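The goal-state download above goes through the WireServer REST interface whose base URL already appears earlier in this log, with both sides settling on protocol version 2015-04-05. A rough curl equivalent, assuming the usual requirement that requests carry an x-ms-version header matching that negotiated version:

# Fetch the current goal state from the WireServer, as waagent does above.
curl -s -H 'x-ms-version: 2015-04-05' 'http://168.63.129.16/machine/?comp=goalstate'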
May 15 11:56:10.007632 waagent[1998]: 2025-05-15T11:56:10.007601Z INFO Daemon Daemon Set hostname [ci-4334.0.0-a-59732b8df3] May 15 11:56:10.029370 waagent[1998]: 2025-05-15T11:56:10.029340Z INFO Daemon Daemon Publish hostname [ci-4334.0.0-a-59732b8df3] May 15 11:56:10.034603 waagent[1998]: 2025-05-15T11:56:10.034568Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 15 11:56:10.039924 waagent[1998]: 2025-05-15T11:56:10.039891Z INFO Daemon Daemon Primary interface is [eth0] May 15 11:56:10.049802 systemd-networkd[1606]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 11:56:10.049813 systemd-networkd[1606]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 11:56:10.049858 systemd-networkd[1606]: eth0: DHCP lease lost May 15 11:56:10.050743 waagent[1998]: 2025-05-15T11:56:10.050517Z INFO Daemon Daemon Create user account if not exists May 15 11:56:10.055149 waagent[1998]: 2025-05-15T11:56:10.055117Z INFO Daemon Daemon User core already exists, skip useradd May 15 11:56:10.062842 waagent[1998]: 2025-05-15T11:56:10.059589Z INFO Daemon Daemon Configure sudoer May 15 11:56:10.066313 waagent[1998]: 2025-05-15T11:56:10.066278Z INFO Daemon Daemon Configure sshd May 15 11:56:10.076616 waagent[1998]: 2025-05-15T11:56:10.076526Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 15 11:56:10.086691 waagent[1998]: 2025-05-15T11:56:10.086661Z INFO Daemon Daemon Deploy ssh public key. May 15 11:56:10.091508 systemd-networkd[1606]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 15 11:56:11.201930 waagent[1998]: 2025-05-15T11:56:11.198991Z INFO Daemon Daemon Provisioning complete May 15 11:56:11.211407 waagent[1998]: 2025-05-15T11:56:11.211376Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 15 11:56:11.215729 waagent[1998]: 2025-05-15T11:56:11.215702Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
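The systemd-networkd entries above show eth0 briefly dropping and re-acquiring its DHCP lease (10.200.20.23/24 from 168.63.129.16) while waagent publishes the hostname. The same link and address state can be inspected with networkctl, which reads what networkd is logging here:

# Show eth0's link, DHCP lease and address state as seen by systemd-networkd.
networkctl status eth0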
May 15 11:56:11.222551 waagent[1998]: 2025-05-15T11:56:11.222528Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 15 11:56:11.315358 waagent[2103]: 2025-05-15T11:56:11.314977Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 15 11:56:11.315358 waagent[2103]: 2025-05-15T11:56:11.315078Z INFO ExtHandler ExtHandler OS: flatcar 4334.0.0 May 15 11:56:11.315358 waagent[2103]: 2025-05-15T11:56:11.315115Z INFO ExtHandler ExtHandler Python: 3.11.12 May 15 11:56:11.315358 waagent[2103]: 2025-05-15T11:56:11.315148Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 15 11:56:11.378925 waagent[2103]: 2025-05-15T11:56:11.378877Z INFO ExtHandler ExtHandler Distro: flatcar-4334.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; May 15 11:56:11.379153 waagent[2103]: 2025-05-15T11:56:11.379127Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 15 11:56:11.379256 waagent[2103]: 2025-05-15T11:56:11.379234Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 15 11:56:11.384961 waagent[2103]: 2025-05-15T11:56:11.384918Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 15 11:56:11.390467 waagent[2103]: 2025-05-15T11:56:11.389652Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 15 11:56:11.390467 waagent[2103]: 2025-05-15T11:56:11.389968Z INFO ExtHandler May 15 11:56:11.390467 waagent[2103]: 2025-05-15T11:56:11.390019Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 34a7ce32-b472-4eb0-bedf-fa45643edf18 eTag: 12566516059047332491 source: Fabric] May 15 11:56:11.390467 waagent[2103]: 2025-05-15T11:56:11.390214Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 15 11:56:11.390632 waagent[2103]: 2025-05-15T11:56:11.390602Z INFO ExtHandler May 15 11:56:11.390666 waagent[2103]: 2025-05-15T11:56:11.390650Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 15 11:56:11.393748 waagent[2103]: 2025-05-15T11:56:11.393724Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 15 11:56:11.449884 waagent[2103]: 2025-05-15T11:56:11.449835Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EB5979F34EE25A4E33EEE1A66B0FEEC669DAE918', 'hasPrivateKey': False} May 15 11:56:11.450135 waagent[2103]: 2025-05-15T11:56:11.450106Z INFO ExtHandler Downloaded certificate {'thumbprint': '3AD752C5A1ACB9A4967067CBF39F2AD5386F64D6', 'hasPrivateKey': True} May 15 11:56:11.450410 waagent[2103]: 2025-05-15T11:56:11.450383Z INFO ExtHandler Fetch goal state completed May 15 11:56:11.461698 waagent[2103]: 2025-05-15T11:56:11.461623Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 15 11:56:11.464697 waagent[2103]: 2025-05-15T11:56:11.464655Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2103 May 15 11:56:11.464786 waagent[2103]: 2025-05-15T11:56:11.464762Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 15 11:56:11.465024 waagent[2103]: 2025-05-15T11:56:11.465000Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 15 11:56:11.466075 waagent[2103]: 2025-05-15T11:56:11.466045Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4334.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 15 11:56:11.466377 waagent[2103]: 2025-05-15T11:56:11.466350Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4334.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 15 11:56:11.466516 waagent[2103]: 2025-05-15T11:56:11.466491Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 15 11:56:11.466936 waagent[2103]: 2025-05-15T11:56:11.466908Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 15 11:56:11.485344 waagent[2103]: 2025-05-15T11:56:11.485314Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 15 11:56:11.485488 waagent[2103]: 2025-05-15T11:56:11.485461Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 15 11:56:11.490057 waagent[2103]: 2025-05-15T11:56:11.490029Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 15 11:56:11.494388 systemd[1]: Reload requested from client PID 2120 ('systemctl') (unit waagent.service)... May 15 11:56:11.494588 systemd[1]: Reloading... May 15 11:56:11.556467 zram_generator::config[2160]: No configuration found. May 15 11:56:11.618388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 11:56:11.698153 systemd[1]: Reloading finished in 203 ms. 
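The daemon reload above is triggered by waagent enabling the helper unit it just installed (see "Service: waagent-network-setup.service not enabled. Adding it now" earlier and "Successfully added and enabled" right after the reload). A by-hand equivalent would simply be:

# Enable waagent's firewall helper unit, then reload systemd's unit files.
sudo systemctl enable waagent-network-setup.service
sudo systemctl daemon-reload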
May 15 11:56:11.718096 waagent[2103]: 2025-05-15T11:56:11.718004Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 15 11:56:11.718143 waagent[2103]: 2025-05-15T11:56:11.718126Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 15 11:56:11.992206 waagent[2103]: 2025-05-15T11:56:11.992098Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 15 11:56:11.992406 waagent[2103]: 2025-05-15T11:56:11.992378Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 15 11:56:11.992991 waagent[2103]: 2025-05-15T11:56:11.992955Z INFO ExtHandler ExtHandler Starting env monitor service. May 15 11:56:11.993252 waagent[2103]: 2025-05-15T11:56:11.993214Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 15 11:56:11.993615 waagent[2103]: 2025-05-15T11:56:11.993577Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 15 11:56:11.993716 waagent[2103]: 2025-05-15T11:56:11.993692Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 15 11:56:11.993755 waagent[2103]: 2025-05-15T11:56:11.993720Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 15 11:56:11.994445 waagent[2103]: 2025-05-15T11:56:11.993891Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 15 11:56:11.994445 waagent[2103]: 2025-05-15T11:56:11.993952Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 15 11:56:11.994445 waagent[2103]: 2025-05-15T11:56:11.994052Z INFO EnvHandler ExtHandler Configure routes May 15 11:56:11.994445 waagent[2103]: 2025-05-15T11:56:11.994089Z INFO EnvHandler ExtHandler Gateway:None May 15 11:56:11.994445 waagent[2103]: 2025-05-15T11:56:11.994111Z INFO EnvHandler ExtHandler Routes:None May 15 11:56:11.994669 waagent[2103]: 2025-05-15T11:56:11.994637Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 15 11:56:11.994791 waagent[2103]: 2025-05-15T11:56:11.994768Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 15 11:56:11.994904 waagent[2103]: 2025-05-15T11:56:11.994880Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 15 11:56:11.995098 waagent[2103]: 2025-05-15T11:56:11.995073Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 15 11:56:11.995581 waagent[2103]: 2025-05-15T11:56:11.995539Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 15 11:56:11.996305 waagent[2103]: 2025-05-15T11:56:11.996280Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 15 11:56:11.996305 waagent[2103]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 15 11:56:11.996305 waagent[2103]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 15 11:56:11.996305 waagent[2103]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 15 11:56:11.996305 waagent[2103]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 15 11:56:11.996305 waagent[2103]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 15 11:56:11.996305 waagent[2103]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 15 11:56:12.003123 waagent[2103]: 2025-05-15T11:56:12.003096Z INFO ExtHandler ExtHandler May 15 11:56:12.003285 waagent[2103]: 2025-05-15T11:56:12.003261Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b758d3e6-2e17-4ed0-ad53-21b224420fcb correlation f0e5c20d-3ab3-4719-a95b-6fe9420e88e9 created: 2025-05-15T11:55:03.026459Z] May 15 11:56:12.003628 waagent[2103]: 2025-05-15T11:56:12.003601Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 15 11:56:12.004106 waagent[2103]: 2025-05-15T11:56:12.004076Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] May 15 11:56:12.025915 waagent[2103]: 2025-05-15T11:56:12.025886Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command May 15 11:56:12.025915 waagent[2103]: Try `iptables -h' or 'iptables --help' for more information.) May 15 11:56:12.026279 waagent[2103]: 2025-05-15T11:56:12.026252Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F8E65927-DF5B-47F2-B25C-B2B4C70D3F76;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 15 11:56:12.045157 waagent[2103]: 2025-05-15T11:56:12.045127Z INFO MonitorHandler ExtHandler Network interfaces: May 15 11:56:12.045157 waagent[2103]: Executing ['ip', '-a', '-o', 'link']: May 15 11:56:12.045157 waagent[2103]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 15 11:56:12.045157 waagent[2103]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:fc:d4 brd ff:ff:ff:ff:ff:ff May 15 11:56:12.045157 waagent[2103]: 3: enP25826s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:fc:d4 brd ff:ff:ff:ff:ff:ff\ altname enP25826p0s2 May 15 11:56:12.045157 waagent[2103]: Executing ['ip', '-4', '-a', '-o', 'address']: May 15 11:56:12.045157 waagent[2103]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 15 11:56:12.045157 waagent[2103]: 2: eth0 inet 10.200.20.23/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 15 11:56:12.045157 waagent[2103]: Executing ['ip', '-6', '-a', '-o', 'address']: May 15 11:56:12.045157 waagent[2103]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 15 11:56:12.045157 waagent[2103]: 2: eth0 inet6 fe80::20d:3aff:fec5:fcd4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 15 
11:56:12.045157 waagent[2103]: 3: enP25826s1 inet6 fe80::20d:3aff:fec5:fcd4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 15 11:56:12.078824 waagent[2103]: 2025-05-15T11:56:12.078782Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 15 11:56:12.078824 waagent[2103]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 15 11:56:12.078824 waagent[2103]: pkts bytes target prot opt in out source destination May 15 11:56:12.078824 waagent[2103]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 15 11:56:12.078824 waagent[2103]: pkts bytes target prot opt in out source destination May 15 11:56:12.078824 waagent[2103]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) May 15 11:56:12.078824 waagent[2103]: pkts bytes target prot opt in out source destination May 15 11:56:12.078824 waagent[2103]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 15 11:56:12.078824 waagent[2103]: 7 883 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 15 11:56:12.078824 waagent[2103]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 15 11:56:12.081149 waagent[2103]: 2025-05-15T11:56:12.081109Z INFO EnvHandler ExtHandler Current Firewall rules: May 15 11:56:12.081149 waagent[2103]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 15 11:56:12.081149 waagent[2103]: pkts bytes target prot opt in out source destination May 15 11:56:12.081149 waagent[2103]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 15 11:56:12.081149 waagent[2103]: pkts bytes target prot opt in out source destination May 15 11:56:12.081149 waagent[2103]: Chain OUTPUT (policy ACCEPT 4 packets, 224 bytes) May 15 11:56:12.081149 waagent[2103]: pkts bytes target prot opt in out source destination May 15 11:56:12.081149 waagent[2103]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 15 11:56:12.081149 waagent[2103]: 8 935 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 15 11:56:12.081149 waagent[2103]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 15 11:56:12.081317 waagent[2103]: 2025-05-15T11:56:12.081294Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 15 11:56:18.283936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 11:56:18.285227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:18.384303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:18.386925 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 11:56:18.501157 kubelet[2252]: E0515 11:56:18.501112 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 11:56:18.503712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 11:56:18.503831 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 11:56:18.504113 systemd[1]: kubelet.service: Consumed 100ms CPU time, 101.2M memory peak. May 15 11:56:19.945379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
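The EnvHandler listings a few entries above show the three OUTPUT rules waagent maintains in the iptables "security" table to guard traffic to the WireServer (168.63.129.16): DNS allowed, root-owned (UID 0) traffic allowed, and any other new or invalid connection dropped. A sketch of the same rules added by hand, matching that listing:

# Recreate the three rules shown in the listing above, in the security table.
sudo iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
sudo iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
sudo iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP

# List them; the agent's failing variant above also passes "--zero OUTPUT", which this
# iptables build rejects in combination with -n, hence the warning in the log.
sudo iptables -w -t security -L OUTPUT -nvx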
May 15 11:56:19.946303 systemd[1]: Started sshd@0-10.200.20.23:22-10.200.16.10:47732.service - OpenSSH per-connection server daemon (10.200.16.10:47732). May 15 11:56:20.490610 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 47732 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:56:20.491629 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:20.495222 systemd-logind[1854]: New session 3 of user core. May 15 11:56:20.506564 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 11:56:20.901656 systemd[1]: Started sshd@1-10.200.20.23:22-10.200.16.10:47734.service - OpenSSH per-connection server daemon (10.200.16.10:47734). May 15 11:56:21.348640 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 47734 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:56:21.349617 sshd-session[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:21.353334 systemd-logind[1854]: New session 4 of user core. May 15 11:56:21.359543 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 11:56:21.669525 sshd[2267]: Connection closed by 10.200.16.10 port 47734 May 15 11:56:21.670054 sshd-session[2265]: pam_unix(sshd:session): session closed for user core May 15 11:56:21.672734 systemd[1]: sshd@1-10.200.20.23:22-10.200.16.10:47734.service: Deactivated successfully. May 15 11:56:21.673995 systemd[1]: session-4.scope: Deactivated successfully. May 15 11:56:21.674541 systemd-logind[1854]: Session 4 logged out. Waiting for processes to exit. May 15 11:56:21.675778 systemd-logind[1854]: Removed session 4. May 15 11:56:21.748721 systemd[1]: Started sshd@2-10.200.20.23:22-10.200.16.10:47742.service - OpenSSH per-connection server daemon (10.200.16.10:47742). May 15 11:56:22.165233 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 47742 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:56:22.166208 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:22.169590 systemd-logind[1854]: New session 5 of user core. May 15 11:56:22.176640 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 11:56:22.487860 sshd[2275]: Connection closed by 10.200.16.10 port 47742 May 15 11:56:22.487704 sshd-session[2273]: pam_unix(sshd:session): session closed for user core May 15 11:56:22.490687 systemd[1]: sshd@2-10.200.20.23:22-10.200.16.10:47742.service: Deactivated successfully. May 15 11:56:22.493499 systemd[1]: session-5.scope: Deactivated successfully. May 15 11:56:22.494037 systemd-logind[1854]: Session 5 logged out. Waiting for processes to exit. May 15 11:56:22.494984 systemd-logind[1854]: Removed session 5. May 15 11:56:22.570584 systemd[1]: Started sshd@3-10.200.20.23:22-10.200.16.10:47744.service - OpenSSH per-connection server daemon (10.200.16.10:47744). May 15 11:56:23.015396 sshd[2281]: Accepted publickey for core from 10.200.16.10 port 47744 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:56:23.016380 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:23.019811 systemd-logind[1854]: New session 6 of user core. May 15 11:56:23.026563 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 11:56:23.348467 sshd[2283]: Connection closed by 10.200.16.10 port 47744 May 15 11:56:23.348918 sshd-session[2281]: pam_unix(sshd:session): session closed for user core May 15 11:56:23.351808 systemd[1]: sshd@3-10.200.20.23:22-10.200.16.10:47744.service: Deactivated successfully. May 15 11:56:23.353261 systemd[1]: session-6.scope: Deactivated successfully. May 15 11:56:23.354000 systemd-logind[1854]: Session 6 logged out. Waiting for processes to exit. May 15 11:56:23.355278 systemd-logind[1854]: Removed session 6. May 15 11:56:23.433351 systemd[1]: Started sshd@4-10.200.20.23:22-10.200.16.10:47760.service - OpenSSH per-connection server daemon (10.200.16.10:47760). May 15 11:56:23.881834 sshd[2289]: Accepted publickey for core from 10.200.16.10 port 47760 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:56:23.882801 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:23.886136 systemd-logind[1854]: New session 7 of user core. May 15 11:56:23.896704 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 11:56:24.257730 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 11:56:24.257941 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:56:24.287958 sudo[2292]: pam_unix(sudo:session): session closed for user root May 15 11:56:24.357600 sshd[2291]: Connection closed by 10.200.16.10 port 47760 May 15 11:56:24.358027 sshd-session[2289]: pam_unix(sshd:session): session closed for user core May 15 11:56:24.360723 systemd[1]: sshd@4-10.200.20.23:22-10.200.16.10:47760.service: Deactivated successfully. May 15 11:56:24.363498 systemd[1]: session-7.scope: Deactivated successfully. May 15 11:56:24.364019 systemd-logind[1854]: Session 7 logged out. Waiting for processes to exit. May 15 11:56:24.365034 systemd-logind[1854]: Removed session 7. May 15 11:56:24.431605 systemd[1]: Started sshd@5-10.200.20.23:22-10.200.16.10:47762.service - OpenSSH per-connection server daemon (10.200.16.10:47762). May 15 11:56:24.844066 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 47762 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:56:24.845075 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:24.848323 systemd-logind[1854]: New session 8 of user core. May 15 11:56:24.854548 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 11:56:25.077946 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 11:56:25.078384 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:56:25.084232 sudo[2302]: pam_unix(sudo:session): session closed for user root May 15 11:56:25.087357 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 11:56:25.087553 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:56:25.093837 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 11:56:25.117666 augenrules[2324]: No rules May 15 11:56:25.118411 systemd[1]: audit-rules.service: Deactivated successfully. May 15 11:56:25.118593 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
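The audit-rules restart above ends with augenrules reporting "No rules", since the two default rule files were removed by sudo just beforehand. A quick way to confirm what the kernel actually has loaded afterwards (auditctl is part of the same audit userspace as augenrules):

# List the audit rules currently loaded in the kernel; prints "No rules" when empty.
sudo auditctl -l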
May 15 11:56:25.119393 sudo[2301]: pam_unix(sudo:session): session closed for user root May 15 11:56:25.208468 sshd[2300]: Connection closed by 10.200.16.10 port 47762 May 15 11:56:25.208802 sshd-session[2298]: pam_unix(sshd:session): session closed for user core May 15 11:56:25.211201 systemd[1]: sshd@5-10.200.20.23:22-10.200.16.10:47762.service: Deactivated successfully. May 15 11:56:25.212227 systemd[1]: session-8.scope: Deactivated successfully. May 15 11:56:25.213637 systemd-logind[1854]: Session 8 logged out. Waiting for processes to exit. May 15 11:56:25.214847 systemd-logind[1854]: Removed session 8. May 15 11:56:25.285629 systemd[1]: Started sshd@6-10.200.20.23:22-10.200.16.10:47776.service - OpenSSH per-connection server daemon (10.200.16.10:47776). May 15 11:56:25.700525 sshd[2333]: Accepted publickey for core from 10.200.16.10 port 47776 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:56:25.701510 sshd-session[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:56:25.704898 systemd-logind[1854]: New session 9 of user core. May 15 11:56:25.713702 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 11:56:25.936820 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 11:56:25.937012 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 11:56:27.190662 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 11:56:27.200694 (dockerd)[2353]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 11:56:28.248465 dockerd[2353]: time="2025-05-15T11:56:28.248235824Z" level=info msg="Starting up" May 15 11:56:28.250369 dockerd[2353]: time="2025-05-15T11:56:28.250314864Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 11:56:28.283146 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1161791784-merged.mount: Deactivated successfully. May 15 11:56:28.399506 dockerd[2353]: time="2025-05-15T11:56:28.399477608Z" level=info msg="Loading containers: start." May 15 11:56:28.440461 kernel: Initializing XFRM netlink socket May 15 11:56:28.533905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 11:56:28.536609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:28.712237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:28.718751 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 11:56:28.742506 kubelet[2432]: E0515 11:56:28.742455 2432 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 11:56:28.744140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 11:56:28.744243 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 11:56:28.744632 systemd[1]: kubelet.service: Consumed 96ms CPU time, 101.9M memory peak. 
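This is the second time kubelet.service hits the missing-config failure and is put back on its restart timer; the loop is expected until the node is bootstrapped into a cluster. The counter and the recurring error can be watched with:

# Show the unit's state, restart counter and last failure.
systemctl status kubelet.service
# Follow the unit's log, including the run.go error repeated above.
journalctl -u kubelet.service -f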
May 15 11:56:29.157613 systemd-networkd[1606]: docker0: Link UP May 15 11:56:29.167965 dockerd[2353]: time="2025-05-15T11:56:29.167897664Z" level=info msg="Loading containers: done." May 15 11:56:29.248690 dockerd[2353]: time="2025-05-15T11:56:29.248652904Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 11:56:29.248983 dockerd[2353]: time="2025-05-15T11:56:29.248721216Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 11:56:29.248983 dockerd[2353]: time="2025-05-15T11:56:29.248814184Z" level=info msg="Initializing buildkit" May 15 11:56:29.282848 dockerd[2353]: time="2025-05-15T11:56:29.282822944Z" level=info msg="Completed buildkit initialization" May 15 11:56:29.287842 dockerd[2353]: time="2025-05-15T11:56:29.287813928Z" level=info msg="Daemon has completed initialization" May 15 11:56:29.288069 dockerd[2353]: time="2025-05-15T11:56:29.288029152Z" level=info msg="API listen on /run/docker.sock" May 15 11:56:29.288205 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 11:56:30.090826 containerd[1874]: time="2025-05-15T11:56:30.090788840Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 11:56:30.872580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074693554.mount: Deactivated successfully. May 15 11:56:30.910285 chronyd[1852]: Selected source PHC0 May 15 11:56:31.895708 containerd[1874]: time="2025-05-15T11:56:31.895654407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:31.897522 containerd[1874]: time="2025-05-15T11:56:31.897369447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" May 15 11:56:31.902029 containerd[1874]: time="2025-05-15T11:56:31.901989697Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:31.905288 containerd[1874]: time="2025-05-15T11:56:31.905262993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:31.905983 containerd[1874]: time="2025-05-15T11:56:31.905953359Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.81512571s" May 15 11:56:31.906075 containerd[1874]: time="2025-05-15T11:56:31.906061416Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 15 11:56:31.906770 containerd[1874]: time="2025-05-15T11:56:31.906748194Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 11:56:33.028507 containerd[1874]: time="2025-05-15T11:56:33.028463726Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:33.030447 containerd[1874]: time="2025-05-15T11:56:33.030305798Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" May 15 11:56:33.033176 containerd[1874]: time="2025-05-15T11:56:33.033153822Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:33.036945 containerd[1874]: time="2025-05-15T11:56:33.036920974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:33.037470 containerd[1874]: time="2025-05-15T11:56:33.037355190Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.130505168s" May 15 11:56:33.037470 containerd[1874]: time="2025-05-15T11:56:33.037382566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 15 11:56:33.037912 containerd[1874]: time="2025-05-15T11:56:33.037890974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 11:56:34.002470 containerd[1874]: time="2025-05-15T11:56:34.002019286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:34.004813 containerd[1874]: time="2025-05-15T11:56:34.004784086Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" May 15 11:56:34.007746 containerd[1874]: time="2025-05-15T11:56:34.007702966Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:34.012292 containerd[1874]: time="2025-05-15T11:56:34.012245206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:34.012903 containerd[1874]: time="2025-05-15T11:56:34.012792862Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 974.878504ms" May 15 11:56:34.012903 containerd[1874]: time="2025-05-15T11:56:34.012820318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 15 11:56:34.013456 containerd[1874]: time="2025-05-15T11:56:34.013407814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 
15 11:56:34.952263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873818661.mount: Deactivated successfully. May 15 11:56:35.226308 containerd[1874]: time="2025-05-15T11:56:35.226188510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:35.228183 containerd[1874]: time="2025-05-15T11:56:35.228152014Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" May 15 11:56:35.231454 containerd[1874]: time="2025-05-15T11:56:35.231404654Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:35.236197 containerd[1874]: time="2025-05-15T11:56:35.236164734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:35.236419 containerd[1874]: time="2025-05-15T11:56:35.236397542Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.222968736s" May 15 11:56:35.236473 containerd[1874]: time="2025-05-15T11:56:35.236421382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 15 11:56:35.237233 containerd[1874]: time="2025-05-15T11:56:35.237211302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 11:56:36.425166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303573706.mount: Deactivated successfully. 
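The containerd records above report each control-plane image pull with its byte size and wall-clock duration ("Pulled image ... size ... in ..."). A small stdlib sketch that extracts those records from a journal dump like this one and prints effective throughput; the regex assumes the escaped-quote formatting shown here and may need adjusting for other log sources.

```python
#!/usr/bin/env python3
"""Sketch: mine the containerd 'Pulled image ... in <duration>' records above
and report per-image pull throughput. Reads the journal text from stdin."""
import re
import sys

PULL_RE = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*?'
    r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def pull_stats(text: str):
    """Yield (image, size in MiB, seconds, MiB/s) for each pulled image found."""
    for m in PULL_RE.finditer(text):
        seconds = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        size_mib = int(m["size"]) / (1024 * 1024)
        yield m["image"], size_mib, seconds, size_mib / seconds

if __name__ == "__main__":
    for image, mib, secs, rate in pull_stats(sys.stdin.read()):
        print(f"{image}: {mib:.1f} MiB in {secs:.2f}s ({rate:.1f} MiB/s)")
```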
May 15 11:56:37.240787 containerd[1874]: time="2025-05-15T11:56:37.240744254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:37.246470 containerd[1874]: time="2025-05-15T11:56:37.246445334Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 15 11:56:37.250571 containerd[1874]: time="2025-05-15T11:56:37.250549054Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:37.255552 containerd[1874]: time="2025-05-15T11:56:37.255511918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:37.256176 containerd[1874]: time="2025-05-15T11:56:37.255781030Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.018478056s" May 15 11:56:37.256176 containerd[1874]: time="2025-05-15T11:56:37.255806734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 11:56:37.256320 containerd[1874]: time="2025-05-15T11:56:37.256279838Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 11:56:37.821969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302894534.mount: Deactivated successfully. 
May 15 11:56:37.846176 containerd[1874]: time="2025-05-15T11:56:37.845730566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 11:56:37.848313 containerd[1874]: time="2025-05-15T11:56:37.848293614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 15 11:56:37.852130 containerd[1874]: time="2025-05-15T11:56:37.852108358Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 11:56:37.857543 containerd[1874]: time="2025-05-15T11:56:37.857507734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 11:56:37.857971 containerd[1874]: time="2025-05-15T11:56:37.857949326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 601.646488ms" May 15 11:56:37.858049 containerd[1874]: time="2025-05-15T11:56:37.858037078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 11:56:37.858650 containerd[1874]: time="2025-05-15T11:56:37.858633358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 11:56:38.481179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644531085.mount: Deactivated successfully. May 15 11:56:38.784193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 11:56:38.786420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:38.933074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:38.937672 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 11:56:39.009034 kubelet[2720]: E0515 11:56:39.008972 2720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 11:56:39.011051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 11:56:39.011269 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 11:56:39.011824 systemd[1]: kubelet.service: Consumed 98ms CPU time, 102.7M memory peak. 
May 15 11:56:41.049411 containerd[1874]: time="2025-05-15T11:56:41.049356949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:41.052144 containerd[1874]: time="2025-05-15T11:56:41.052119752Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 15 11:56:41.055839 containerd[1874]: time="2025-05-15T11:56:41.055795920Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:41.060153 containerd[1874]: time="2025-05-15T11:56:41.060106864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:41.060792 containerd[1874]: time="2025-05-15T11:56:41.060693926Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.20196992s" May 15 11:56:41.060792 containerd[1874]: time="2025-05-15T11:56:41.060719142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 15 11:56:43.158574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:43.158874 systemd[1]: kubelet.service: Consumed 98ms CPU time, 102.7M memory peak. May 15 11:56:43.160925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:43.180493 systemd[1]: Reload requested from client PID 2788 ('systemctl') (unit session-9.scope)... May 15 11:56:43.180503 systemd[1]: Reloading... May 15 11:56:43.257465 zram_generator::config[2836]: No configuration found. May 15 11:56:43.323998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 11:56:43.403900 systemd[1]: Reloading finished in 223 ms. May 15 11:56:43.447772 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 11:56:43.447823 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 11:56:43.448009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:43.448041 systemd[1]: kubelet.service: Consumed 68ms CPU time, 90.1M memory peak. May 15 11:56:43.449082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:43.750701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:43.754983 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 11:56:43.800014 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 11:56:43.800014 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 15 11:56:43.800014 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 11:56:43.800014 kubelet[2900]: I0515 11:56:43.799541 2900 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 11:56:44.065445 kubelet[2900]: I0515 11:56:44.065404 2900 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 11:56:44.065445 kubelet[2900]: I0515 11:56:44.065449 2900 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 11:56:44.065652 kubelet[2900]: I0515 11:56:44.065637 2900 server.go:954] "Client rotation is on, will bootstrap in background" May 15 11:56:44.075302 kubelet[2900]: E0515 11:56:44.075275 2900 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" May 15 11:56:44.077998 kubelet[2900]: I0515 11:56:44.077891 2900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 11:56:44.085582 kubelet[2900]: I0515 11:56:44.085563 2900 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 11:56:44.087920 kubelet[2900]: I0515 11:56:44.087904 2900 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 11:56:44.088082 kubelet[2900]: I0515 11:56:44.088060 2900 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 11:56:44.088196 kubelet[2900]: I0515 11:56:44.088080 2900 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-59732b8df3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 11:56:44.088271 kubelet[2900]: I0515 11:56:44.088203 2900 topology_manager.go:138] "Creating topology manager with none policy" May 15 11:56:44.088271 kubelet[2900]: I0515 11:56:44.088209 2900 container_manager_linux.go:304] "Creating device plugin manager" May 15 11:56:44.088317 kubelet[2900]: I0515 11:56:44.088307 2900 state_mem.go:36] "Initialized new in-memory state store" May 15 11:56:44.090136 kubelet[2900]: I0515 11:56:44.090122 2900 kubelet.go:446] "Attempting to sync node with API server" May 15 11:56:44.090173 kubelet[2900]: I0515 11:56:44.090140 2900 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 11:56:44.090173 kubelet[2900]: I0515 11:56:44.090159 2900 kubelet.go:352] "Adding apiserver pod source" May 15 11:56:44.090173 kubelet[2900]: I0515 11:56:44.090169 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 11:56:44.093029 kubelet[2900]: W0515 11:56:44.092979 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused May 15 11:56:44.093029 kubelet[2900]: E0515 11:56:44.093020 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" May 15 11:56:44.093255 kubelet[2900]: W0515 
11:56:44.093236 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-59732b8df3&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused May 15 11:56:44.093278 kubelet[2900]: E0515 11:56:44.093258 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-59732b8df3&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" May 15 11:56:44.093419 kubelet[2900]: I0515 11:56:44.093407 2900 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 11:56:44.093718 kubelet[2900]: I0515 11:56:44.093700 2900 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 11:56:44.093769 kubelet[2900]: W0515 11:56:44.093743 2900 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 11:56:44.094130 kubelet[2900]: I0515 11:56:44.094111 2900 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 11:56:44.094171 kubelet[2900]: I0515 11:56:44.094136 2900 server.go:1287] "Started kubelet" May 15 11:56:44.096359 kubelet[2900]: I0515 11:56:44.096336 2900 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 11:56:44.096728 kubelet[2900]: I0515 11:56:44.096712 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 11:56:44.097793 kubelet[2900]: I0515 11:56:44.097778 2900 server.go:490] "Adding debug handlers to kubelet server" May 15 11:56:44.098643 kubelet[2900]: I0515 11:56:44.098601 2900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 11:56:44.098856 kubelet[2900]: I0515 11:56:44.098843 2900 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 11:56:44.100341 kubelet[2900]: I0515 11:56:44.100309 2900 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 11:56:44.100482 kubelet[2900]: E0515 11:56:44.100467 2900 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-59732b8df3\" not found" May 15 11:56:44.100948 kubelet[2900]: I0515 11:56:44.100930 2900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 11:56:44.102151 kubelet[2900]: E0515 11:56:44.102074 2900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-59732b8df3.183fb15f7b72144d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-59732b8df3,UID:ci-4334.0.0-a-59732b8df3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-59732b8df3,},FirstTimestamp:2025-05-15 11:56:44.094125133 +0000 UTC m=+0.336818655,LastTimestamp:2025-05-15 11:56:44.094125133 +0000 UTC m=+0.336818655,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-59732b8df3,}" May 15 11:56:44.102305 kubelet[2900]: E0515 11:56:44.102289 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-59732b8df3?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="200ms" May 15 11:56:44.102390 kubelet[2900]: I0515 11:56:44.102370 2900 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 11:56:44.102427 kubelet[2900]: I0515 11:56:44.102413 2900 reconciler.go:26] "Reconciler: start to sync state" May 15 11:56:44.103223 kubelet[2900]: I0515 11:56:44.102547 2900 factory.go:221] Registration of the systemd container factory successfully May 15 11:56:44.103223 kubelet[2900]: I0515 11:56:44.102601 2900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 11:56:44.103223 kubelet[2900]: W0515 11:56:44.103151 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused May 15 11:56:44.103223 kubelet[2900]: E0515 11:56:44.103183 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" May 15 11:56:44.104050 kubelet[2900]: I0515 11:56:44.104029 2900 factory.go:221] Registration of the containerd container factory successfully May 15 11:56:44.108577 kubelet[2900]: E0515 11:56:44.108558 2900 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 11:56:44.129758 kubelet[2900]: I0515 11:56:44.129742 2900 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 11:56:44.129758 kubelet[2900]: I0515 11:56:44.129753 2900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 11:56:44.129840 kubelet[2900]: I0515 11:56:44.129765 2900 state_mem.go:36] "Initialized new in-memory state store" May 15 11:56:44.148248 kubelet[2900]: E0515 11:56:44.148190 2900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-59732b8df3.183fb15f7b72144d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-59732b8df3,UID:ci-4334.0.0-a-59732b8df3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-59732b8df3,},FirstTimestamp:2025-05-15 11:56:44.094125133 +0000 UTC m=+0.336818655,LastTimestamp:2025-05-15 11:56:44.094125133 +0000 UTC m=+0.336818655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-59732b8df3,}" May 15 11:56:44.201394 kubelet[2900]: E0515 11:56:44.201375 2900 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-59732b8df3\" not found" May 15 11:56:44.222534 kubelet[2900]: I0515 11:56:44.221811 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 11:56:44.223238 kubelet[2900]: I0515 11:56:44.223210 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 11:56:44.223238 kubelet[2900]: I0515 11:56:44.223240 2900 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 11:56:44.223302 kubelet[2900]: I0515 11:56:44.223252 2900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
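Every reflector and event error above fails the same way, `dial tcp 10.200.20.23:6443: connect: connection refused`, because the kube-apiserver static pod has not been started yet. A stdlib sketch (illustration, not kubelet code) that probes that endpoint until the port opens; the endpoint is taken from the log, the attempt count and delay are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: check whether the API server endpoint that the kubelet above keeps
failing to reach has started accepting TCP connections."""
import socket
import time

APISERVER = ("10.200.20.23", 6443)  # endpoint taken from the connection-refused errors in the log

def wait_for_apiserver(attempts: int = 10, delay_s: float = 2.0) -> bool:
    for attempt in range(1, attempts + 1):
        try:
            with socket.create_connection(APISERVER, timeout=2.0):
                return True  # port is open; TLS and authentication happen above this layer
        except OSError as exc:
            print(f"attempt {attempt}: {exc}")
            time.sleep(delay_s)
    return False

if __name__ == "__main__":
    print("reachable" if wait_for_apiserver(attempts=3, delay_s=1.0) else "still refused")
```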
May 15 11:56:44.223302 kubelet[2900]: I0515 11:56:44.223257 2900 kubelet.go:2388] "Starting kubelet main sync loop" May 15 11:56:44.223302 kubelet[2900]: E0515 11:56:44.223284 2900 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 11:56:44.223776 kubelet[2900]: I0515 11:56:44.223590 2900 policy_none.go:49] "None policy: Start" May 15 11:56:44.223776 kubelet[2900]: I0515 11:56:44.223607 2900 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 11:56:44.223776 kubelet[2900]: I0515 11:56:44.223615 2900 state_mem.go:35] "Initializing new in-memory state store" May 15 11:56:44.225285 kubelet[2900]: W0515 11:56:44.225249 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused May 15 11:56:44.225285 kubelet[2900]: E0515 11:56:44.225284 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" May 15 11:56:44.230037 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 11:56:44.242512 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 11:56:44.244985 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 11:56:44.259244 kubelet[2900]: I0515 11:56:44.259183 2900 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 11:56:44.259798 kubelet[2900]: I0515 11:56:44.259426 2900 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 11:56:44.259798 kubelet[2900]: I0515 11:56:44.259459 2900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 11:56:44.259798 kubelet[2900]: I0515 11:56:44.259712 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 11:56:44.260989 kubelet[2900]: E0515 11:56:44.260930 2900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 11:56:44.261166 kubelet[2900]: E0515 11:56:44.261138 2900 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-59732b8df3\" not found" May 15 11:56:44.302735 kubelet[2900]: E0515 11:56:44.302696 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-59732b8df3?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="400ms" May 15 11:56:44.331186 systemd[1]: Created slice kubepods-burstable-pod38dfd00ddc7a7e1241e1fc6b22e80c01.slice - libcontainer container kubepods-burstable-pod38dfd00ddc7a7e1241e1fc6b22e80c01.slice. 
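The "Failed to ensure lease exists, will retry" errors show the retry interval doubling: 200ms earlier, 400ms here, and 800ms further down. A tiny sketch of that capped exponential backoff; the 7s ceiling is an assumed illustration value, not something read from this log.

```python
#!/usr/bin/env python3
"""Sketch of the doubling retry interval seen in the lease-controller errors
above (200ms, 400ms, then 800ms)."""
import itertools

def lease_retry_intervals(base_s: float = 0.2, cap_s: float = 7.0):
    """Yield exponentially growing retry intervals in seconds, capped at cap_s."""
    interval = base_s
    while True:
        yield min(interval, cap_s)
        interval *= 2

if __name__ == "__main__":
    # First six intervals: 0.2, 0.4, 0.8, 1.6, 3.2, 6.4 -- matching the logged 200ms/400ms/800ms progression.
    print(list(itertools.islice(lease_retry_intervals(), 6)))
```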
May 15 11:56:44.345577 kubelet[2900]: E0515 11:56:44.345529 2900 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-59732b8df3\" not found" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:44.347955 systemd[1]: Created slice kubepods-burstable-pod9b46f1950ac50b690d85618c38fd8026.slice - libcontainer container kubepods-burstable-pod9b46f1950ac50b690d85618c38fd8026.slice. May 15 11:56:44.349990 kubelet[2900]: E0515 11:56:44.349920 2900 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-59732b8df3\" not found" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:44.351684 systemd[1]: Created slice kubepods-burstable-pod6274bfcc2789edc77ba03a23523017b2.slice - libcontainer container kubepods-burstable-pod6274bfcc2789edc77ba03a23523017b2.slice. May 15 11:56:44.352926 kubelet[2900]: E0515 11:56:44.352908 2900 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-59732b8df3\" not found" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:44.360916 kubelet[2900]: I0515 11:56:44.360892 2900 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:44.361318 kubelet[2900]: E0515 11:56:44.361287 2900 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403776 kubelet[2900]: I0515 11:56:44.403602 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6274bfcc2789edc77ba03a23523017b2-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" (UID: \"6274bfcc2789edc77ba03a23523017b2\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403776 kubelet[2900]: I0515 11:56:44.403634 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403776 kubelet[2900]: I0515 11:56:44.403649 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b46f1950ac50b690d85618c38fd8026-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-59732b8df3\" (UID: \"9b46f1950ac50b690d85618c38fd8026\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403776 kubelet[2900]: I0515 11:56:44.403659 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403776 kubelet[2900]: I0515 11:56:44.403670 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-kubeconfig\") pod 
\"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403944 kubelet[2900]: I0515 11:56:44.403680 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6274bfcc2789edc77ba03a23523017b2-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" (UID: \"6274bfcc2789edc77ba03a23523017b2\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403944 kubelet[2900]: I0515 11:56:44.403698 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6274bfcc2789edc77ba03a23523017b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" (UID: \"6274bfcc2789edc77ba03a23523017b2\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403944 kubelet[2900]: I0515 11:56:44.403707 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.403944 kubelet[2900]: I0515 11:56:44.403720 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:44.563036 kubelet[2900]: I0515 11:56:44.562797 2900 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:44.563233 kubelet[2900]: E0515 11:56:44.563211 2900 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:44.646867 containerd[1874]: time="2025-05-15T11:56:44.646775963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-59732b8df3,Uid:38dfd00ddc7a7e1241e1fc6b22e80c01,Namespace:kube-system,Attempt:0,}" May 15 11:56:44.651271 containerd[1874]: time="2025-05-15T11:56:44.651179677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-59732b8df3,Uid:9b46f1950ac50b690d85618c38fd8026,Namespace:kube-system,Attempt:0,}" May 15 11:56:44.653820 containerd[1874]: time="2025-05-15T11:56:44.653793540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-59732b8df3,Uid:6274bfcc2789edc77ba03a23523017b2,Namespace:kube-system,Attempt:0,}" May 15 11:56:44.703635 kubelet[2900]: E0515 11:56:44.703608 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-59732b8df3?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="800ms" May 15 11:56:44.724040 containerd[1874]: time="2025-05-15T11:56:44.724011035Z" level=info msg="connecting to shim 
d1ec473fdebf23017d2a27355a9d45ccb7c68d44688d93df351d55b96bec351e" address="unix:///run/containerd/s/eedb16c913749185ec3f7d58136804e70c653a11936db25ab6e4a42e73845fdd" namespace=k8s.io protocol=ttrpc version=3 May 15 11:56:44.740566 systemd[1]: Started cri-containerd-d1ec473fdebf23017d2a27355a9d45ccb7c68d44688d93df351d55b96bec351e.scope - libcontainer container d1ec473fdebf23017d2a27355a9d45ccb7c68d44688d93df351d55b96bec351e. May 15 11:56:44.756511 containerd[1874]: time="2025-05-15T11:56:44.755954162Z" level=info msg="connecting to shim 700a15d9754c93fb5fc577e849890cae974ac04935874539b0c5f1765a3d35a4" address="unix:///run/containerd/s/ada4a5f08fed38463df86145c6d0f781dfade4df2981747a3a1513b81ca414b5" namespace=k8s.io protocol=ttrpc version=3 May 15 11:56:44.756947 containerd[1874]: time="2025-05-15T11:56:44.756767470Z" level=info msg="connecting to shim b90320344ef1c69ae441a2510a70161785dd670fdcaa97cd408e673a1a0b5def" address="unix:///run/containerd/s/90194dd2d57eef6e28fed3cc2d9b409d398b51b6d7d57b612604068cca967ed2" namespace=k8s.io protocol=ttrpc version=3 May 15 11:56:44.778553 systemd[1]: Started cri-containerd-700a15d9754c93fb5fc577e849890cae974ac04935874539b0c5f1765a3d35a4.scope - libcontainer container 700a15d9754c93fb5fc577e849890cae974ac04935874539b0c5f1765a3d35a4. May 15 11:56:44.784544 containerd[1874]: time="2025-05-15T11:56:44.784495744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-59732b8df3,Uid:38dfd00ddc7a7e1241e1fc6b22e80c01,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1ec473fdebf23017d2a27355a9d45ccb7c68d44688d93df351d55b96bec351e\"" May 15 11:56:44.790886 containerd[1874]: time="2025-05-15T11:56:44.790396126Z" level=info msg="CreateContainer within sandbox \"d1ec473fdebf23017d2a27355a9d45ccb7c68d44688d93df351d55b96bec351e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 11:56:44.798708 systemd[1]: Started cri-containerd-b90320344ef1c69ae441a2510a70161785dd670fdcaa97cd408e673a1a0b5def.scope - libcontainer container b90320344ef1c69ae441a2510a70161785dd670fdcaa97cd408e673a1a0b5def. 
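Each "connecting to shim ... address=unix:///run/containerd/s/<id>" record above refers to a per-sandbox containerd shim listening on a Unix socket under /run/containerd/s/. A stdlib sketch (illustrative only, not containerd tooling) that checks such a socket accepts connections; the example path is the one logged for the controller-manager sandbox, with the unix:// prefix stripped.

```python
#!/usr/bin/env python3
"""Sketch: probe a containerd shim socket like the ones logged above.
Only checks connectability on the AF_UNIX socket, nothing protocol-level."""
import socket

# Socket path copied from a "connecting to shim" record above.
SHIM_SOCKET = "/run/containerd/s/eedb16c913749185ec3f7d58136804e70c653a11936db25ab6e4a42e73845fdd"

def shim_socket_reachable(path: str = SHIM_SOCKET, timeout_s: float = 1.0) -> bool:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout_s)
        try:
            sock.connect(path)
            return True
        except OSError:
            return False

if __name__ == "__main__":
    print("reachable" if shim_socket_reachable() else "not reachable")
```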
May 15 11:56:44.813241 containerd[1874]: time="2025-05-15T11:56:44.813141161Z" level=info msg="Container e85e4e784c2f3dd41e2c01639219492255cbc1452bf17752cf2ef127197fa0ad: CDI devices from CRI Config.CDIDevices: []" May 15 11:56:44.814969 containerd[1874]: time="2025-05-15T11:56:44.814919059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-59732b8df3,Uid:6274bfcc2789edc77ba03a23523017b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"700a15d9754c93fb5fc577e849890cae974ac04935874539b0c5f1765a3d35a4\"" May 15 11:56:44.817072 containerd[1874]: time="2025-05-15T11:56:44.817054271Z" level=info msg="CreateContainer within sandbox \"700a15d9754c93fb5fc577e849890cae974ac04935874539b0c5f1765a3d35a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 11:56:44.831355 containerd[1874]: time="2025-05-15T11:56:44.831319677Z" level=info msg="CreateContainer within sandbox \"d1ec473fdebf23017d2a27355a9d45ccb7c68d44688d93df351d55b96bec351e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e85e4e784c2f3dd41e2c01639219492255cbc1452bf17752cf2ef127197fa0ad\"" May 15 11:56:44.832051 containerd[1874]: time="2025-05-15T11:56:44.832014462Z" level=info msg="StartContainer for \"e85e4e784c2f3dd41e2c01639219492255cbc1452bf17752cf2ef127197fa0ad\"" May 15 11:56:44.832805 containerd[1874]: time="2025-05-15T11:56:44.832762448Z" level=info msg="connecting to shim e85e4e784c2f3dd41e2c01639219492255cbc1452bf17752cf2ef127197fa0ad" address="unix:///run/containerd/s/eedb16c913749185ec3f7d58136804e70c653a11936db25ab6e4a42e73845fdd" protocol=ttrpc version=3 May 15 11:56:44.842167 containerd[1874]: time="2025-05-15T11:56:44.842033575Z" level=info msg="Container 6ea28afa2f7fddc199406de6a8e61fc28015a65e46f6feee2c16033ec43d4f77: CDI devices from CRI Config.CDIDevices: []" May 15 11:56:44.843448 containerd[1874]: time="2025-05-15T11:56:44.843330206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-59732b8df3,Uid:9b46f1950ac50b690d85618c38fd8026,Namespace:kube-system,Attempt:0,} returns sandbox id \"b90320344ef1c69ae441a2510a70161785dd670fdcaa97cd408e673a1a0b5def\"" May 15 11:56:44.845372 containerd[1874]: time="2025-05-15T11:56:44.845347694Z" level=info msg="CreateContainer within sandbox \"b90320344ef1c69ae441a2510a70161785dd670fdcaa97cd408e673a1a0b5def\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 11:56:44.847552 systemd[1]: Started cri-containerd-e85e4e784c2f3dd41e2c01639219492255cbc1452bf17752cf2ef127197fa0ad.scope - libcontainer container e85e4e784c2f3dd41e2c01639219492255cbc1452bf17752cf2ef127197fa0ad. 
May 15 11:56:44.859921 containerd[1874]: time="2025-05-15T11:56:44.859894364Z" level=info msg="CreateContainer within sandbox \"700a15d9754c93fb5fc577e849890cae974ac04935874539b0c5f1765a3d35a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ea28afa2f7fddc199406de6a8e61fc28015a65e46f6feee2c16033ec43d4f77\"" May 15 11:56:44.861293 containerd[1874]: time="2025-05-15T11:56:44.860524691Z" level=info msg="StartContainer for \"6ea28afa2f7fddc199406de6a8e61fc28015a65e46f6feee2c16033ec43d4f77\"" May 15 11:56:44.861293 containerd[1874]: time="2025-05-15T11:56:44.861118889Z" level=info msg="connecting to shim 6ea28afa2f7fddc199406de6a8e61fc28015a65e46f6feee2c16033ec43d4f77" address="unix:///run/containerd/s/ada4a5f08fed38463df86145c6d0f781dfade4df2981747a3a1513b81ca414b5" protocol=ttrpc version=3 May 15 11:56:44.878242 systemd[1]: Started cri-containerd-6ea28afa2f7fddc199406de6a8e61fc28015a65e46f6feee2c16033ec43d4f77.scope - libcontainer container 6ea28afa2f7fddc199406de6a8e61fc28015a65e46f6feee2c16033ec43d4f77. May 15 11:56:44.885186 containerd[1874]: time="2025-05-15T11:56:44.885165019Z" level=info msg="Container c711081a7441cb6b648e70d65cef4fae77225dd9b62cc03a836ae72aea286cde: CDI devices from CRI Config.CDIDevices: []" May 15 11:56:44.888806 containerd[1874]: time="2025-05-15T11:56:44.888770690Z" level=info msg="StartContainer for \"e85e4e784c2f3dd41e2c01639219492255cbc1452bf17752cf2ef127197fa0ad\" returns successfully" May 15 11:56:44.905673 containerd[1874]: time="2025-05-15T11:56:44.905230069Z" level=info msg="CreateContainer within sandbox \"b90320344ef1c69ae441a2510a70161785dd670fdcaa97cd408e673a1a0b5def\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c711081a7441cb6b648e70d65cef4fae77225dd9b62cc03a836ae72aea286cde\"" May 15 11:56:44.906326 containerd[1874]: time="2025-05-15T11:56:44.905911030Z" level=info msg="StartContainer for \"c711081a7441cb6b648e70d65cef4fae77225dd9b62cc03a836ae72aea286cde\"" May 15 11:56:44.907452 containerd[1874]: time="2025-05-15T11:56:44.906692248Z" level=info msg="connecting to shim c711081a7441cb6b648e70d65cef4fae77225dd9b62cc03a836ae72aea286cde" address="unix:///run/containerd/s/90194dd2d57eef6e28fed3cc2d9b409d398b51b6d7d57b612604068cca967ed2" protocol=ttrpc version=3 May 15 11:56:44.918361 containerd[1874]: time="2025-05-15T11:56:44.918330184Z" level=info msg="StartContainer for \"6ea28afa2f7fddc199406de6a8e61fc28015a65e46f6feee2c16033ec43d4f77\" returns successfully" May 15 11:56:44.928685 systemd[1]: Started cri-containerd-c711081a7441cb6b648e70d65cef4fae77225dd9b62cc03a836ae72aea286cde.scope - libcontainer container c711081a7441cb6b648e70d65cef4fae77225dd9b62cc03a836ae72aea286cde. 
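By this point the log contains the static-pod startup chain for the control-plane components: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer reports success (the scheduler's confirmation follows just below). A sketch that reconstructs that chain from a journal dump like this one; the regexes assume the escaped-quote formatting shown here.

```python
#!/usr/bin/env python3
"""Sketch: link the RunPodSandbox -> CreateContainer -> StartContainer records
above into one chain per static pod. Reads the journal text from stdin."""
import re
import sys

SANDBOX_RE = re.compile(r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),[^"]*returns sandbox id \\"([0-9a-f]+)')
CREATE_RE = re.compile(r'CreateContainer within sandbox \\"([0-9a-f]+)\\" for &ContainerMetadata\{Name:([^,]+),[^"]*returns container id \\"([0-9a-f]+)')
START_RE = re.compile(r'StartContainer for \\"([0-9a-f]+)\\" returns successfully')

def startup_chain(text: str) -> dict:
    """Map pod name -> (container name, short container id, started?)."""
    pods = {sid: name for name, sid in SANDBOX_RE.findall(text)}
    started = set(START_RE.findall(text))
    chain = {}
    for sid, cname, cid in CREATE_RE.findall(text):
        chain[pods.get(sid, sid)] = (cname, cid[:12], cid in started)
    return chain

if __name__ == "__main__":
    for pod, (container, cid, ok) in startup_chain(sys.stdin.read()).items():
        print(f"{pod}: {container} ({cid}) started={ok}")
```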
May 15 11:56:44.964592 kubelet[2900]: I0515 11:56:44.964569 2900 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:45.043096 containerd[1874]: time="2025-05-15T11:56:45.043055261Z" level=info msg="StartContainer for \"c711081a7441cb6b648e70d65cef4fae77225dd9b62cc03a836ae72aea286cde\" returns successfully" May 15 11:56:45.232046 kubelet[2900]: E0515 11:56:45.231808 2900 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-59732b8df3\" not found" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:45.236684 kubelet[2900]: E0515 11:56:45.236347 2900 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-59732b8df3\" not found" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:45.237558 kubelet[2900]: E0515 11:56:45.237432 2900 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334.0.0-a-59732b8df3\" not found" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:46.002164 kubelet[2900]: E0515 11:56:46.002110 2900 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-59732b8df3\" not found" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:46.092926 kubelet[2900]: I0515 11:56:46.092782 2900 apiserver.go:52] "Watching apiserver" May 15 11:56:46.103797 kubelet[2900]: I0515 11:56:46.102970 2900 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 11:56:46.185049 kubelet[2900]: I0515 11:56:46.185015 2900 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:46.200989 kubelet[2900]: I0515 11:56:46.200958 2900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.210762 kubelet[2900]: E0515 11:56:46.210733 2900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-59732b8df3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.210762 kubelet[2900]: I0515 11:56:46.210758 2900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.214573 kubelet[2900]: E0515 11:56:46.214540 2900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.214573 kubelet[2900]: I0515 11:56:46.214567 2900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.216083 kubelet[2900]: E0515 11:56:46.216049 2900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.238336 kubelet[2900]: I0515 11:56:46.238313 2900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.238628 kubelet[2900]: I0515 11:56:46.238608 2900 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.241221 kubelet[2900]: E0515 11:56:46.241158 2900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334.0.0-a-59732b8df3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" May 15 11:56:46.242388 kubelet[2900]: E0515 11:56:46.242344 2900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.211925 systemd[1]: Reload requested from client PID 3170 ('systemctl') (unit session-9.scope)... May 15 11:56:48.211938 systemd[1]: Reloading... May 15 11:56:48.284512 zram_generator::config[3224]: No configuration found. May 15 11:56:48.338125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 11:56:48.426389 systemd[1]: Reloading finished in 214 ms. May 15 11:56:48.448541 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:48.460260 systemd[1]: kubelet.service: Deactivated successfully. May 15 11:56:48.460430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:48.460500 systemd[1]: kubelet.service: Consumed 549ms CPU time, 121M memory peak. May 15 11:56:48.461675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 11:56:48.543747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 11:56:48.549670 (kubelet)[3279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 11:56:48.578387 kubelet[3279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 11:56:48.578387 kubelet[3279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 11:56:48.578387 kubelet[3279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 11:56:48.578603 kubelet[3279]: I0515 11:56:48.578428 3279 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 11:56:48.583687 kubelet[3279]: I0515 11:56:48.582840 3279 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 11:56:48.583687 kubelet[3279]: I0515 11:56:48.582858 3279 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 11:56:48.583687 kubelet[3279]: I0515 11:56:48.583148 3279 server.go:954] "Client rotation is on, will bootstrap in background" May 15 11:56:48.584508 kubelet[3279]: I0515 11:56:48.584488 3279 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 15 11:56:48.586011 kubelet[3279]: I0515 11:56:48.585990 3279 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 11:56:48.589360 kubelet[3279]: I0515 11:56:48.589339 3279 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 11:56:48.591695 kubelet[3279]: I0515 11:56:48.591680 3279 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 11:56:48.591852 kubelet[3279]: I0515 11:56:48.591831 3279 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 11:56:48.591957 kubelet[3279]: I0515 11:56:48.591850 3279 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-59732b8df3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 11:56:48.592015 kubelet[3279]: I0515 11:56:48.591962 3279 topology_manager.go:138] "Creating topology manager with none policy" May 15 11:56:48.592015 kubelet[3279]: I0515 11:56:48.591968 3279 container_manager_linux.go:304] "Creating device plugin manager" May 15 11:56:48.592015 kubelet[3279]: I0515 11:56:48.592002 3279 state_mem.go:36] "Initialized new in-memory state store" May 15 11:56:48.592487 kubelet[3279]: I0515 11:56:48.592081 3279 kubelet.go:446] "Attempting to sync node with API server" May 15 11:56:48.592487 kubelet[3279]: I0515 11:56:48.592092 3279 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 11:56:48.592487 kubelet[3279]: I0515 11:56:48.592125 3279 kubelet.go:352] "Adding apiserver pod source" May 15 11:56:48.592487 kubelet[3279]: I0515 11:56:48.592135 3279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 11:56:48.592831 kubelet[3279]: I0515 11:56:48.592818 3279 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 11:56:48.593198 kubelet[3279]: I0515 11:56:48.593182 3279 kubelet.go:890] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" May 15 11:56:48.593589 kubelet[3279]: I0515 11:56:48.593570 3279 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 11:56:48.593673 kubelet[3279]: I0515 11:56:48.593665 3279 server.go:1287] "Started kubelet" May 15 11:56:48.595649 kubelet[3279]: I0515 11:56:48.595638 3279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 11:56:48.596547 kubelet[3279]: I0515 11:56:48.596512 3279 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 11:56:48.597294 kubelet[3279]: I0515 11:56:48.597263 3279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 11:56:48.597562 kubelet[3279]: I0515 11:56:48.597548 3279 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 11:56:48.598009 kubelet[3279]: I0515 11:56:48.597988 3279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 11:56:48.600520 kubelet[3279]: I0515 11:56:48.599865 3279 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 11:56:48.603162 kubelet[3279]: E0515 11:56:48.602247 3279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-59732b8df3\" not found" May 15 11:56:48.604907 kubelet[3279]: I0515 11:56:48.604206 3279 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 11:56:48.605037 kubelet[3279]: I0515 11:56:48.600477 3279 server.go:490] "Adding debug handlers to kubelet server" May 15 11:56:48.607904 kubelet[3279]: I0515 11:56:48.607725 3279 reconciler.go:26] "Reconciler: start to sync state" May 15 11:56:48.615044 kubelet[3279]: I0515 11:56:48.615029 3279 factory.go:221] Registration of the systemd container factory successfully May 15 11:56:48.615222 kubelet[3279]: I0515 11:56:48.615206 3279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 11:56:48.618196 kubelet[3279]: I0515 11:56:48.617226 3279 factory.go:221] Registration of the containerd container factory successfully May 15 11:56:48.622406 kubelet[3279]: I0515 11:56:48.621528 3279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 11:56:48.622406 kubelet[3279]: I0515 11:56:48.622247 3279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 11:56:48.622406 kubelet[3279]: I0515 11:56:48.622270 3279 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 11:56:48.622406 kubelet[3279]: I0515 11:56:48.622285 3279 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
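The NodeConfig dumped above embeds the hard eviction thresholds this kubelet enforces: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch that evaluates those signals against node stats; the threshold values are copied from the log, while the sample stats are made-up numbers for illustration.

```python
#!/usr/bin/env python3
"""Sketch: apply the HardEvictionThresholds from the NodeConfig above to a set
of (available, capacity) readings and report which signals would fire."""

# Thresholds copied from the log: an absolute quantity in bytes, or a fraction of capacity.
HARD_EVICTION = {
    "memory.available":   {"quantity": 100 * 1024 * 1024},
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def signals_under_pressure(stats: dict) -> list:
    """Return the eviction signals whose available amount is below its threshold."""
    firing = []
    for signal, threshold in HARD_EVICTION.items():
        available, capacity = stats[signal]
        limit = threshold.get("quantity", threshold.get("percentage", 0) * capacity)
        if available < limit:
            firing.append(signal)
    return firing

if __name__ == "__main__":
    sample = {  # (available, capacity) pairs -- assumed readings, not from this host
        "memory.available":   (80 * 1024**2, 4 * 1024**3),
        "nodefs.available":   (30 * 1024**3, 120 * 1024**3),
        "nodefs.inodesFree":  (900_000, 7_800_000),
        "imagefs.available":  (15 * 1024**3, 120 * 1024**3),
        "imagefs.inodesFree": (900_000, 7_800_000),
    }
    print(signals_under_pressure(sample))  # -> ['memory.available', 'imagefs.available']
```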
May 15 11:56:48.622406 kubelet[3279]: I0515 11:56:48.622289 3279 kubelet.go:2388] "Starting kubelet main sync loop" May 15 11:56:48.622406 kubelet[3279]: E0515 11:56:48.622320 3279 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 11:56:48.662091 kubelet[3279]: I0515 11:56:48.662071 3279 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 11:56:48.662091 kubelet[3279]: I0515 11:56:48.662085 3279 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 11:56:48.662201 kubelet[3279]: I0515 11:56:48.662102 3279 state_mem.go:36] "Initialized new in-memory state store" May 15 11:56:48.662219 kubelet[3279]: I0515 11:56:48.662204 3279 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 11:56:48.662219 kubelet[3279]: I0515 11:56:48.662211 3279 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 11:56:48.662246 kubelet[3279]: I0515 11:56:48.662224 3279 policy_none.go:49] "None policy: Start" May 15 11:56:48.662246 kubelet[3279]: I0515 11:56:48.662232 3279 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 11:56:48.662246 kubelet[3279]: I0515 11:56:48.662239 3279 state_mem.go:35] "Initializing new in-memory state store" May 15 11:56:48.662312 kubelet[3279]: I0515 11:56:48.662299 3279 state_mem.go:75] "Updated machine memory state" May 15 11:56:48.665748 kubelet[3279]: I0515 11:56:48.665731 3279 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 11:56:48.665947 kubelet[3279]: I0515 11:56:48.665924 3279 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 11:56:48.666092 kubelet[3279]: I0515 11:56:48.666065 3279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 11:56:48.666746 kubelet[3279]: I0515 11:56:48.666733 3279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 11:56:48.667958 kubelet[3279]: E0515 11:56:48.667854 3279 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 11:56:48.723151 kubelet[3279]: I0515 11:56:48.723081 3279 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.723728 kubelet[3279]: I0515 11:56:48.723313 3279 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.723728 kubelet[3279]: I0515 11:56:48.723373 3279 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.730228 kubelet[3279]: W0515 11:56:48.730214 3279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 11:56:48.732997 kubelet[3279]: W0515 11:56:48.732897 3279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 11:56:48.733217 kubelet[3279]: W0515 11:56:48.733082 3279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 11:56:48.768594 kubelet[3279]: I0515 11:56:48.768577 3279 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:48.780814 kubelet[3279]: I0515 11:56:48.780788 3279 kubelet_node_status.go:125] "Node was previously registered" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:48.780917 kubelet[3279]: I0515 11:56:48.780854 3279 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334.0.0-a-59732b8df3" May 15 11:56:48.809791 kubelet[3279]: I0515 11:56:48.809733 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.809791 kubelet[3279]: I0515 11:56:48.809760 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6274bfcc2789edc77ba03a23523017b2-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" (UID: \"6274bfcc2789edc77ba03a23523017b2\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.809791 kubelet[3279]: I0515 11:56:48.809772 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.810011 kubelet[3279]: I0515 11:56:48.809909 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.810011 kubelet[3279]: I0515 11:56:48.809927 3279 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.810011 kubelet[3279]: I0515 11:56:48.810058 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b46f1950ac50b690d85618c38fd8026-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-59732b8df3\" (UID: \"9b46f1950ac50b690d85618c38fd8026\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.810011 kubelet[3279]: I0515 11:56:48.810074 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6274bfcc2789edc77ba03a23523017b2-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" (UID: \"6274bfcc2789edc77ba03a23523017b2\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.810011 kubelet[3279]: I0515 11:56:48.810086 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6274bfcc2789edc77ba03a23523017b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-59732b8df3\" (UID: \"6274bfcc2789edc77ba03a23523017b2\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" May 15 11:56:48.810532 kubelet[3279]: I0515 11:56:48.810476 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38dfd00ddc7a7e1241e1fc6b22e80c01-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-59732b8df3\" (UID: \"38dfd00ddc7a7e1241e1fc6b22e80c01\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" May 15 11:56:49.599064 kubelet[3279]: I0515 11:56:49.599024 3279 apiserver.go:52] "Watching apiserver" May 15 11:56:49.605447 kubelet[3279]: I0515 11:56:49.605416 3279 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 11:56:49.676091 kubelet[3279]: I0515 11:56:49.675887 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-59732b8df3" podStartSLOduration=1.675877695 podStartE2EDuration="1.675877695s" podCreationTimestamp="2025-05-15 11:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:56:49.666447026 +0000 UTC m=+1.114277428" watchObservedRunningTime="2025-05-15 11:56:49.675877695 +0000 UTC m=+1.123708097" May 15 11:56:49.685348 kubelet[3279]: I0515 11:56:49.685145 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-59732b8df3" podStartSLOduration=1.685135024 podStartE2EDuration="1.685135024s" podCreationTimestamp="2025-05-15 11:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:56:49.676014762 +0000 UTC m=+1.123845164" watchObservedRunningTime="2025-05-15 11:56:49.685135024 +0000 UTC m=+1.132965426" May 15 11:56:49.698622 kubelet[3279]: I0515 11:56:49.698511 3279 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-59732b8df3" podStartSLOduration=1.698500232 podStartE2EDuration="1.698500232s" podCreationTimestamp="2025-05-15 11:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:56:49.686233275 +0000 UTC m=+1.134063685" watchObservedRunningTime="2025-05-15 11:56:49.698500232 +0000 UTC m=+1.146330682" May 15 11:56:50.429599 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 15 11:56:52.688658 update_engine[1858]: I20250515 11:56:52.688591 1858 update_attempter.cc:509] Updating boot flags... May 15 11:56:53.228734 sudo[2336]: pam_unix(sudo:session): session closed for user root May 15 11:56:53.317956 sshd[2335]: Connection closed by 10.200.16.10 port 47776 May 15 11:56:53.317885 sshd-session[2333]: pam_unix(sshd:session): session closed for user core May 15 11:56:53.320311 systemd[1]: sshd@6-10.200.20.23:22-10.200.16.10:47776.service: Deactivated successfully. May 15 11:56:53.322733 systemd[1]: session-9.scope: Deactivated successfully. May 15 11:56:53.322927 systemd[1]: session-9.scope: Consumed 2.711s CPU time, 231.4M memory peak. May 15 11:56:53.325836 systemd-logind[1854]: Session 9 logged out. Waiting for processes to exit. May 15 11:56:53.327010 systemd-logind[1854]: Removed session 9. May 15 11:56:53.924050 kubelet[3279]: I0515 11:56:53.924021 3279 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 11:56:53.924681 kubelet[3279]: I0515 11:56:53.924371 3279 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 11:56:53.924717 containerd[1874]: time="2025-05-15T11:56:53.924246074Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 11:56:54.765483 systemd[1]: Created slice kubepods-besteffort-pod4d75a282_f86f_4652_8eac_41c1f048104c.slice - libcontainer container kubepods-besteffort-pod4d75a282_f86f_4652_8eac_41c1f048104c.slice. 
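The kubelet entry just above pushes podCIDR 192.168.0.0/24 to the container runtime, so pod IPs allocated on this node are expected to come from that range while node addresses (such as the 10.200.x.x SSH peers seen earlier) are not. The stand-alone sketch below, which is not kubelet code, checks CIDR membership with the standard library; 192.168.0.17 is just an example pod address, not one taken from the log.

package main

import (
	"fmt"
	"net"
)

func main() {
	// The pod CIDR the kubelet reported to the runtime in the entry above.
	_, podCIDR, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}

	// A /24 covers 256 addresses; a pod IP should fall inside it, while the
	// node address seen in the sshd lines earlier in this log does not.
	for _, addr := range []string{"192.168.0.17", "10.200.20.23"} {
		fmt.Printf("%-14s in %s: %v\n", addr, podCIDR, podCIDR.Contains(net.ParseIP(addr)))
	}
}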
May 15 11:56:54.847155 kubelet[3279]: I0515 11:56:54.847119 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2mps\" (UniqueName: \"kubernetes.io/projected/4d75a282-f86f-4652-8eac-41c1f048104c-kube-api-access-d2mps\") pod \"kube-proxy-n8jzm\" (UID: \"4d75a282-f86f-4652-8eac-41c1f048104c\") " pod="kube-system/kube-proxy-n8jzm" May 15 11:56:54.847264 kubelet[3279]: I0515 11:56:54.847187 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d75a282-f86f-4652-8eac-41c1f048104c-kube-proxy\") pod \"kube-proxy-n8jzm\" (UID: \"4d75a282-f86f-4652-8eac-41c1f048104c\") " pod="kube-system/kube-proxy-n8jzm" May 15 11:56:54.847264 kubelet[3279]: I0515 11:56:54.847228 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d75a282-f86f-4652-8eac-41c1f048104c-xtables-lock\") pod \"kube-proxy-n8jzm\" (UID: \"4d75a282-f86f-4652-8eac-41c1f048104c\") " pod="kube-system/kube-proxy-n8jzm" May 15 11:56:54.847264 kubelet[3279]: I0515 11:56:54.847244 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d75a282-f86f-4652-8eac-41c1f048104c-lib-modules\") pod \"kube-proxy-n8jzm\" (UID: \"4d75a282-f86f-4652-8eac-41c1f048104c\") " pod="kube-system/kube-proxy-n8jzm" May 15 11:56:55.033529 systemd[1]: Created slice kubepods-besteffort-pod4df418de_b118_405d_b8b7_ecacfcbeab25.slice - libcontainer container kubepods-besteffort-pod4df418de_b118_405d_b8b7_ecacfcbeab25.slice. May 15 11:56:55.048266 kubelet[3279]: I0515 11:56:55.048209 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4df418de-b118-405d-b8b7-ecacfcbeab25-var-lib-calico\") pod \"tigera-operator-789496d6f5-wxszg\" (UID: \"4df418de-b118-405d-b8b7-ecacfcbeab25\") " pod="tigera-operator/tigera-operator-789496d6f5-wxszg" May 15 11:56:55.048675 kubelet[3279]: I0515 11:56:55.048643 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhls6\" (UniqueName: \"kubernetes.io/projected/4df418de-b118-405d-b8b7-ecacfcbeab25-kube-api-access-hhls6\") pod \"tigera-operator-789496d6f5-wxszg\" (UID: \"4df418de-b118-405d-b8b7-ecacfcbeab25\") " pod="tigera-operator/tigera-operator-789496d6f5-wxszg" May 15 11:56:55.072778 containerd[1874]: time="2025-05-15T11:56:55.072734905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8jzm,Uid:4d75a282-f86f-4652-8eac-41c1f048104c,Namespace:kube-system,Attempt:0,}" May 15 11:56:55.111100 containerd[1874]: time="2025-05-15T11:56:55.111065698Z" level=info msg="connecting to shim 8ad3be292ea574a700c90c57f40dc8eb8f83e7a2a7f153278063b244eed2e3ce" address="unix:///run/containerd/s/66428099bcfac6cec16a899f98179642199913e95247f218ce1504f2701f35e3" namespace=k8s.io protocol=ttrpc version=3 May 15 11:56:55.124555 systemd[1]: Started cri-containerd-8ad3be292ea574a700c90c57f40dc8eb8f83e7a2a7f153278063b244eed2e3ce.scope - libcontainer container 8ad3be292ea574a700c90c57f40dc8eb8f83e7a2a7f153278063b244eed2e3ce. 
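The "connecting to shim ... address=\"unix:///run/containerd/s/...\" protocol=ttrpc" entries above show containerd reaching its per-sandbox runtime shim over a plain Unix domain socket that carries the ttrpc protocol. The sketch below only demonstrates the socket dial itself (the ttrpc layer is omitted); the socket path is copied from the log and exists only on that machine, so anywhere else the dial simply returns an error.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim socket path copied from the "connecting to shim" entry above; on
	// any other machine this file will not exist and Dial returns an error.
	const shimSocket = "/run/containerd/s/66428099bcfac6cec16a899f98179642199913e95247f218ce1504f2701f35e3"

	conn, err := net.DialTimeout("unix", shimSocket, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected; ttrpc traffic would flow over this connection")
}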
May 15 11:56:55.143444 containerd[1874]: time="2025-05-15T11:56:55.143410125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8jzm,Uid:4d75a282-f86f-4652-8eac-41c1f048104c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ad3be292ea574a700c90c57f40dc8eb8f83e7a2a7f153278063b244eed2e3ce\"" May 15 11:56:55.145571 containerd[1874]: time="2025-05-15T11:56:55.145335240Z" level=info msg="CreateContainer within sandbox \"8ad3be292ea574a700c90c57f40dc8eb8f83e7a2a7f153278063b244eed2e3ce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 11:56:55.167569 containerd[1874]: time="2025-05-15T11:56:55.167541104Z" level=info msg="Container 082b0728dd511ce9eb0e5ee7e1185f5c1bc951c1fe73c6fcaf6c1b33429f8dff: CDI devices from CRI Config.CDIDevices: []" May 15 11:56:55.182932 containerd[1874]: time="2025-05-15T11:56:55.182907044Z" level=info msg="CreateContainer within sandbox \"8ad3be292ea574a700c90c57f40dc8eb8f83e7a2a7f153278063b244eed2e3ce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"082b0728dd511ce9eb0e5ee7e1185f5c1bc951c1fe73c6fcaf6c1b33429f8dff\"" May 15 11:56:55.183822 containerd[1874]: time="2025-05-15T11:56:55.183794788Z" level=info msg="StartContainer for \"082b0728dd511ce9eb0e5ee7e1185f5c1bc951c1fe73c6fcaf6c1b33429f8dff\"" May 15 11:56:55.184751 containerd[1874]: time="2025-05-15T11:56:55.184725172Z" level=info msg="connecting to shim 082b0728dd511ce9eb0e5ee7e1185f5c1bc951c1fe73c6fcaf6c1b33429f8dff" address="unix:///run/containerd/s/66428099bcfac6cec16a899f98179642199913e95247f218ce1504f2701f35e3" protocol=ttrpc version=3 May 15 11:56:55.202542 systemd[1]: Started cri-containerd-082b0728dd511ce9eb0e5ee7e1185f5c1bc951c1fe73c6fcaf6c1b33429f8dff.scope - libcontainer container 082b0728dd511ce9eb0e5ee7e1185f5c1bc951c1fe73c6fcaf6c1b33429f8dff. May 15 11:56:55.228070 containerd[1874]: time="2025-05-15T11:56:55.228042864Z" level=info msg="StartContainer for \"082b0728dd511ce9eb0e5ee7e1185f5c1bc951c1fe73c6fcaf6c1b33429f8dff\" returns successfully" May 15 11:56:55.338112 containerd[1874]: time="2025-05-15T11:56:55.337814329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-wxszg,Uid:4df418de-b118-405d-b8b7-ecacfcbeab25,Namespace:tigera-operator,Attempt:0,}" May 15 11:56:55.384232 containerd[1874]: time="2025-05-15T11:56:55.384172428Z" level=info msg="connecting to shim 07c3032b0ea085cf5198fc63f6ca3106b43c3b5e5450c495763ec6693152e10b" address="unix:///run/containerd/s/d787e347146e21c11b1900d882808a59b32c80bb8d1d47c2dff68db2f2a0d2e5" namespace=k8s.io protocol=ttrpc version=3 May 15 11:56:55.407558 systemd[1]: Started cri-containerd-07c3032b0ea085cf5198fc63f6ca3106b43c3b5e5450c495763ec6693152e10b.scope - libcontainer container 07c3032b0ea085cf5198fc63f6ca3106b43c3b5e5450c495763ec6693152e10b. 
May 15 11:56:55.434668 containerd[1874]: time="2025-05-15T11:56:55.434637772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-wxszg,Uid:4df418de-b118-405d-b8b7-ecacfcbeab25,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"07c3032b0ea085cf5198fc63f6ca3106b43c3b5e5450c495763ec6693152e10b\"" May 15 11:56:55.436292 containerd[1874]: time="2025-05-15T11:56:55.436257759Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 11:56:56.891813 kubelet[3279]: I0515 11:56:56.891658 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n8jzm" podStartSLOduration=2.89164443 podStartE2EDuration="2.89164443s" podCreationTimestamp="2025-05-15 11:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:56:55.671242966 +0000 UTC m=+7.119073376" watchObservedRunningTime="2025-05-15 11:56:56.89164443 +0000 UTC m=+8.339474832" May 15 11:56:57.552987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313158943.mount: Deactivated successfully. May 15 11:56:58.162084 containerd[1874]: time="2025-05-15T11:56:58.161986120Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:58.163901 containerd[1874]: time="2025-05-15T11:56:58.163877026Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 15 11:56:58.167005 containerd[1874]: time="2025-05-15T11:56:58.166972684Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:58.172460 containerd[1874]: time="2025-05-15T11:56:58.172210485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:56:58.172876 containerd[1874]: time="2025-05-15T11:56:58.172856830Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.736498253s" May 15 11:56:58.172956 containerd[1874]: time="2025-05-15T11:56:58.172941929Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 11:56:58.175611 containerd[1874]: time="2025-05-15T11:56:58.175591190Z" level=info msg="CreateContainer within sandbox \"07c3032b0ea085cf5198fc63f6ca3106b43c3b5e5450c495763ec6693152e10b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 11:56:58.198398 containerd[1874]: time="2025-05-15T11:56:58.198062478Z" level=info msg="Container b8529b587f96d97f32d82b93b83ffc14def169e7318c8b62851d929d72b941c4: CDI devices from CRI Config.CDIDevices: []" May 15 11:56:58.209783 containerd[1874]: time="2025-05-15T11:56:58.209754169Z" level=info msg="CreateContainer within sandbox \"07c3032b0ea085cf5198fc63f6ca3106b43c3b5e5450c495763ec6693152e10b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"b8529b587f96d97f32d82b93b83ffc14def169e7318c8b62851d929d72b941c4\"" May 15 11:56:58.210740 containerd[1874]: time="2025-05-15T11:56:58.210676977Z" level=info msg="StartContainer for \"b8529b587f96d97f32d82b93b83ffc14def169e7318c8b62851d929d72b941c4\"" May 15 11:56:58.211541 containerd[1874]: time="2025-05-15T11:56:58.211513264Z" level=info msg="connecting to shim b8529b587f96d97f32d82b93b83ffc14def169e7318c8b62851d929d72b941c4" address="unix:///run/containerd/s/d787e347146e21c11b1900d882808a59b32c80bb8d1d47c2dff68db2f2a0d2e5" protocol=ttrpc version=3 May 15 11:56:58.230559 systemd[1]: Started cri-containerd-b8529b587f96d97f32d82b93b83ffc14def169e7318c8b62851d929d72b941c4.scope - libcontainer container b8529b587f96d97f32d82b93b83ffc14def169e7318c8b62851d929d72b941c4. May 15 11:56:58.251385 containerd[1874]: time="2025-05-15T11:56:58.251356696Z" level=info msg="StartContainer for \"b8529b587f96d97f32d82b93b83ffc14def169e7318c8b62851d929d72b941c4\" returns successfully" May 15 11:56:58.693457 kubelet[3279]: I0515 11:56:58.693384 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-wxszg" podStartSLOduration=1.955688634 podStartE2EDuration="4.693369855s" podCreationTimestamp="2025-05-15 11:56:54 +0000 UTC" firstStartedPulling="2025-05-15 11:56:55.435750153 +0000 UTC m=+6.883580555" lastFinishedPulling="2025-05-15 11:56:58.173431374 +0000 UTC m=+9.621261776" observedRunningTime="2025-05-15 11:56:58.681373643 +0000 UTC m=+10.129204045" watchObservedRunningTime="2025-05-15 11:56:58.693369855 +0000 UTC m=+10.141200257" May 15 11:57:01.315595 systemd[1]: Created slice kubepods-besteffort-pod989213f0_576c_49e1_bd85_20d9ffa86519.slice - libcontainer container kubepods-besteffort-pod989213f0_576c_49e1_bd85_20d9ffa86519.slice. May 15 11:57:01.391645 kubelet[3279]: I0515 11:57:01.391490 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/989213f0-576c-49e1-bd85-20d9ffa86519-typha-certs\") pod \"calico-typha-59fbcfc7f5-xg4r6\" (UID: \"989213f0-576c-49e1-bd85-20d9ffa86519\") " pod="calico-system/calico-typha-59fbcfc7f5-xg4r6" May 15 11:57:01.391645 kubelet[3279]: I0515 11:57:01.391524 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/989213f0-576c-49e1-bd85-20d9ffa86519-tigera-ca-bundle\") pod \"calico-typha-59fbcfc7f5-xg4r6\" (UID: \"989213f0-576c-49e1-bd85-20d9ffa86519\") " pod="calico-system/calico-typha-59fbcfc7f5-xg4r6" May 15 11:57:01.391645 kubelet[3279]: I0515 11:57:01.391538 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7h6s\" (UniqueName: \"kubernetes.io/projected/989213f0-576c-49e1-bd85-20d9ffa86519-kube-api-access-j7h6s\") pod \"calico-typha-59fbcfc7f5-xg4r6\" (UID: \"989213f0-576c-49e1-bd85-20d9ffa86519\") " pod="calico-system/calico-typha-59fbcfc7f5-xg4r6" May 15 11:57:01.394856 systemd[1]: Created slice kubepods-besteffort-pod4ec052d2_3c37_4eb4_8e7c_fddf29cdf5d8.slice - libcontainer container kubepods-besteffort-pod4ec052d2_3c37_4eb4_8e7c_fddf29cdf5d8.slice. 
May 15 11:57:01.492874 kubelet[3279]: I0515 11:57:01.492847 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-var-run-calico\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493081 kubelet[3279]: I0515 11:57:01.493028 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-cni-bin-dir\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493081 kubelet[3279]: I0515 11:57:01.493050 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-lib-modules\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493192 kubelet[3279]: I0515 11:57:01.493071 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-node-certs\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493447 kubelet[3279]: I0515 11:57:01.493317 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-cni-log-dir\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493447 kubelet[3279]: I0515 11:57:01.493337 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj5t9\" (UniqueName: \"kubernetes.io/projected/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-kube-api-access-vj5t9\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493447 kubelet[3279]: I0515 11:57:01.493353 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-xtables-lock\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493447 kubelet[3279]: I0515 11:57:01.493362 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-policysync\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493447 kubelet[3279]: I0515 11:57:01.493371 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-cni-net-dir\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493761 kubelet[3279]: I0515 11:57:01.493639 3279 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-tigera-ca-bundle\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493761 kubelet[3279]: I0515 11:57:01.493663 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-flexvol-driver-host\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.493761 kubelet[3279]: I0515 11:57:01.493677 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8-var-lib-calico\") pod \"calico-node-qnppn\" (UID: \"4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8\") " pod="calico-system/calico-node-qnppn" May 15 11:57:01.511613 kubelet[3279]: E0515 11:57:01.511577 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvch9" podUID="db9a440f-784b-44cc-94cc-545a3468a789" May 15 11:57:01.594709 kubelet[3279]: I0515 11:57:01.594615 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db9a440f-784b-44cc-94cc-545a3468a789-socket-dir\") pod \"csi-node-driver-rvch9\" (UID: \"db9a440f-784b-44cc-94cc-545a3468a789\") " pod="calico-system/csi-node-driver-rvch9" May 15 11:57:01.594709 kubelet[3279]: I0515 11:57:01.594648 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2hg\" (UniqueName: \"kubernetes.io/projected/db9a440f-784b-44cc-94cc-545a3468a789-kube-api-access-lr2hg\") pod \"csi-node-driver-rvch9\" (UID: \"db9a440f-784b-44cc-94cc-545a3468a789\") " pod="calico-system/csi-node-driver-rvch9" May 15 11:57:01.594709 kubelet[3279]: I0515 11:57:01.594692 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db9a440f-784b-44cc-94cc-545a3468a789-kubelet-dir\") pod \"csi-node-driver-rvch9\" (UID: \"db9a440f-784b-44cc-94cc-545a3468a789\") " pod="calico-system/csi-node-driver-rvch9" May 15 11:57:01.594822 kubelet[3279]: I0515 11:57:01.594718 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/db9a440f-784b-44cc-94cc-545a3468a789-varrun\") pod \"csi-node-driver-rvch9\" (UID: \"db9a440f-784b-44cc-94cc-545a3468a789\") " pod="calico-system/csi-node-driver-rvch9" May 15 11:57:01.594822 kubelet[3279]: I0515 11:57:01.594728 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db9a440f-784b-44cc-94cc-545a3468a789-registration-dir\") pod \"csi-node-driver-rvch9\" (UID: \"db9a440f-784b-44cc-94cc-545a3468a789\") " pod="calico-system/csi-node-driver-rvch9" May 15 11:57:01.602480 kubelet[3279]: E0515 11:57:01.600932 3279 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input May 15 11:57:01.602480 kubelet[3279]: W0515 11:57:01.600952 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.602480 kubelet[3279]: E0515 11:57:01.600974 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.602691 kubelet[3279]: E0515 11:57:01.602678 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.605733 kubelet[3279]: W0515 11:57:01.604988 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.605733 kubelet[3279]: E0515 11:57:01.605018 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.606475 kubelet[3279]: E0515 11:57:01.606002 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.606475 kubelet[3279]: W0515 11:57:01.606015 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.606475 kubelet[3279]: E0515 11:57:01.606032 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.607036 kubelet[3279]: E0515 11:57:01.607022 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.607111 kubelet[3279]: W0515 11:57:01.607100 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.607164 kubelet[3279]: E0515 11:57:01.607154 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.610457 kubelet[3279]: E0515 11:57:01.610387 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.610457 kubelet[3279]: W0515 11:57:01.610404 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.610457 kubelet[3279]: E0515 11:57:01.610417 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.610575 kubelet[3279]: E0515 11:57:01.610561 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.610575 kubelet[3279]: W0515 11:57:01.610571 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.610613 kubelet[3279]: E0515 11:57:01.610579 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.610752 kubelet[3279]: E0515 11:57:01.610741 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.610752 kubelet[3279]: W0515 11:57:01.610750 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.610807 kubelet[3279]: E0515 11:57:01.610757 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.610922 kubelet[3279]: E0515 11:57:01.610845 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.610922 kubelet[3279]: W0515 11:57:01.610852 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.610922 kubelet[3279]: E0515 11:57:01.610858 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.610991 kubelet[3279]: E0515 11:57:01.610931 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.610991 kubelet[3279]: W0515 11:57:01.610936 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.610991 kubelet[3279]: E0515 11:57:01.610943 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.611033 kubelet[3279]: E0515 11:57:01.611009 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.611033 kubelet[3279]: W0515 11:57:01.611013 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.611033 kubelet[3279]: E0515 11:57:01.611017 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.611540 kubelet[3279]: E0515 11:57:01.611111 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.611540 kubelet[3279]: W0515 11:57:01.611116 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.611540 kubelet[3279]: E0515 11:57:01.611121 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.611540 kubelet[3279]: E0515 11:57:01.611190 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.611540 kubelet[3279]: W0515 11:57:01.611194 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.611540 kubelet[3279]: E0515 11:57:01.611199 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.611540 kubelet[3279]: E0515 11:57:01.611277 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.611540 kubelet[3279]: W0515 11:57:01.611282 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.611540 kubelet[3279]: E0515 11:57:01.611286 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.611540 kubelet[3279]: E0515 11:57:01.611367 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.611681 kubelet[3279]: W0515 11:57:01.611371 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.611681 kubelet[3279]: E0515 11:57:01.611375 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.611681 kubelet[3279]: E0515 11:57:01.611458 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.611681 kubelet[3279]: W0515 11:57:01.611462 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.611681 kubelet[3279]: E0515 11:57:01.611468 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.612560 kubelet[3279]: E0515 11:57:01.612537 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.612560 kubelet[3279]: W0515 11:57:01.612552 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.612560 kubelet[3279]: E0515 11:57:01.612562 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.621466 kubelet[3279]: E0515 11:57:01.619083 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.621466 kubelet[3279]: W0515 11:57:01.619099 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.621466 kubelet[3279]: E0515 11:57:01.619111 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.621573 containerd[1874]: time="2025-05-15T11:57:01.619679493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59fbcfc7f5-xg4r6,Uid:989213f0-576c-49e1-bd85-20d9ffa86519,Namespace:calico-system,Attempt:0,}" May 15 11:57:01.671188 containerd[1874]: time="2025-05-15T11:57:01.670828406Z" level=info msg="connecting to shim 308ddd940484e208cf6004c2d898c0e590da08fe15824ba06874d7b72fece791" address="unix:///run/containerd/s/c0b97c51575681a3de633c28482face87d4341fff9af6c56df40f16347244d0b" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:01.692561 systemd[1]: Started cri-containerd-308ddd940484e208cf6004c2d898c0e590da08fe15824ba06874d7b72fece791.scope - libcontainer container 308ddd940484e208cf6004c2d898c0e590da08fe15824ba06874d7b72fece791. May 15 11:57:01.696254 kubelet[3279]: E0515 11:57:01.696164 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.696254 kubelet[3279]: W0515 11:57:01.696217 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.696254 kubelet[3279]: E0515 11:57:01.696234 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.696675 kubelet[3279]: E0515 11:57:01.696655 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.696675 kubelet[3279]: W0515 11:57:01.696669 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.696937 kubelet[3279]: E0515 11:57:01.696708 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.697380 kubelet[3279]: E0515 11:57:01.696942 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.697380 kubelet[3279]: W0515 11:57:01.696952 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.697380 kubelet[3279]: E0515 11:57:01.696979 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.697380 kubelet[3279]: E0515 11:57:01.697103 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.697380 kubelet[3279]: W0515 11:57:01.697124 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.697380 kubelet[3279]: E0515 11:57:01.697132 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.697380 kubelet[3279]: E0515 11:57:01.697371 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.697380 kubelet[3279]: W0515 11:57:01.697382 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.697562 kubelet[3279]: E0515 11:57:01.697390 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.697562 kubelet[3279]: E0515 11:57:01.697536 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.697562 kubelet[3279]: W0515 11:57:01.697543 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.697562 kubelet[3279]: E0515 11:57:01.697551 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.698081 kubelet[3279]: E0515 11:57:01.697668 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.698081 kubelet[3279]: W0515 11:57:01.697677 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.698081 kubelet[3279]: E0515 11:57:01.697685 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.698081 kubelet[3279]: E0515 11:57:01.697800 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.698081 kubelet[3279]: W0515 11:57:01.697806 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.698081 kubelet[3279]: E0515 11:57:01.697812 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.698081 kubelet[3279]: E0515 11:57:01.697943 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.698081 kubelet[3279]: W0515 11:57:01.697951 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.698081 kubelet[3279]: E0515 11:57:01.697960 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.699005 kubelet[3279]: E0515 11:57:01.698904 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.699005 kubelet[3279]: W0515 11:57:01.698915 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.699005 kubelet[3279]: E0515 11:57:01.698965 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.699054 containerd[1874]: time="2025-05-15T11:57:01.698518750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qnppn,Uid:4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8,Namespace:calico-system,Attempt:0,}" May 15 11:57:01.699341 kubelet[3279]: E0515 11:57:01.699322 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.699341 kubelet[3279]: W0515 11:57:01.699336 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.699490 kubelet[3279]: E0515 11:57:01.699464 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.700096 kubelet[3279]: E0515 11:57:01.700056 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.700096 kubelet[3279]: W0515 11:57:01.700070 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.700177 kubelet[3279]: E0515 11:57:01.700118 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.700359 kubelet[3279]: E0515 11:57:01.700340 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.700359 kubelet[3279]: W0515 11:57:01.700353 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.700710 kubelet[3279]: E0515 11:57:01.700558 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.701061 kubelet[3279]: E0515 11:57:01.701034 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.701061 kubelet[3279]: W0515 11:57:01.701049 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.701142 kubelet[3279]: E0515 11:57:01.701096 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.701958 kubelet[3279]: E0515 11:57:01.701206 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.701958 kubelet[3279]: W0515 11:57:01.701216 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.701958 kubelet[3279]: E0515 11:57:01.701321 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.701958 kubelet[3279]: E0515 11:57:01.701689 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.701958 kubelet[3279]: W0515 11:57:01.701700 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.701958 kubelet[3279]: E0515 11:57:01.701791 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.702159 kubelet[3279]: E0515 11:57:01.702099 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.702159 kubelet[3279]: W0515 11:57:01.702110 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.702275 kubelet[3279]: E0515 11:57:01.702253 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.702773 kubelet[3279]: E0515 11:57:01.702592 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.702773 kubelet[3279]: W0515 11:57:01.702605 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.702773 kubelet[3279]: E0515 11:57:01.702733 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.703146 kubelet[3279]: E0515 11:57:01.703127 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.703146 kubelet[3279]: W0515 11:57:01.703141 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.703487 kubelet[3279]: E0515 11:57:01.703232 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.704163 kubelet[3279]: E0515 11:57:01.704141 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.704163 kubelet[3279]: W0515 11:57:01.704157 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.704421 kubelet[3279]: E0515 11:57:01.704394 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.704658 kubelet[3279]: E0515 11:57:01.704638 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.704658 kubelet[3279]: W0515 11:57:01.704652 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.704904 kubelet[3279]: E0515 11:57:01.704881 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.705495 kubelet[3279]: E0515 11:57:01.705473 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.705495 kubelet[3279]: W0515 11:57:01.705489 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.706014 kubelet[3279]: E0515 11:57:01.705638 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.706014 kubelet[3279]: E0515 11:57:01.705920 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.706014 kubelet[3279]: W0515 11:57:01.705931 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.706014 kubelet[3279]: E0515 11:57:01.705944 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.706297 kubelet[3279]: E0515 11:57:01.706276 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.706297 kubelet[3279]: W0515 11:57:01.706290 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.706389 kubelet[3279]: E0515 11:57:01.706307 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.706711 kubelet[3279]: E0515 11:57:01.706682 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.706711 kubelet[3279]: W0515 11:57:01.706696 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.706711 kubelet[3279]: E0515 11:57:01.706706 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:01.714113 kubelet[3279]: E0515 11:57:01.713840 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:01.714454 kubelet[3279]: W0515 11:57:01.714382 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:01.714625 kubelet[3279]: E0515 11:57:01.714581 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:01.744561 containerd[1874]: time="2025-05-15T11:57:01.744078685Z" level=info msg="connecting to shim 63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3" address="unix:///run/containerd/s/ed3b9bd3ae8ed78745020a4eb141f3a9d0b75328bcc11804df5c72a5d7c524f8" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:01.746338 containerd[1874]: time="2025-05-15T11:57:01.746308268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59fbcfc7f5-xg4r6,Uid:989213f0-576c-49e1-bd85-20d9ffa86519,Namespace:calico-system,Attempt:0,} returns sandbox id \"308ddd940484e208cf6004c2d898c0e590da08fe15824ba06874d7b72fece791\"" May 15 11:57:01.750960 containerd[1874]: time="2025-05-15T11:57:01.750849796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 11:57:01.769558 systemd[1]: Started cri-containerd-63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3.scope - libcontainer container 63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3. May 15 11:57:01.796856 containerd[1874]: time="2025-05-15T11:57:01.796765435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qnppn,Uid:4ec052d2-3c37-4eb4-8e7c-fddf29cdf5d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3\"" May 15 11:57:02.889905 kubelet[3279]: E0515 11:57:02.889867 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.890333 kubelet[3279]: W0515 11:57:02.889887 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.890333 kubelet[3279]: E0515 11:57:02.890238 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.890523 kubelet[3279]: E0515 11:57:02.890410 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.890575 kubelet[3279]: W0515 11:57:02.890419 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.890672 kubelet[3279]: E0515 11:57:02.890617 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.890845 kubelet[3279]: E0515 11:57:02.890833 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.890973 kubelet[3279]: W0515 11:57:02.890894 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.890973 kubelet[3279]: E0515 11:57:02.890906 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:02.891194 kubelet[3279]: E0515 11:57:02.891148 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.891194 kubelet[3279]: W0515 11:57:02.891159 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.891194 kubelet[3279]: E0515 11:57:02.891168 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.891449 kubelet[3279]: E0515 11:57:02.891421 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.891571 kubelet[3279]: W0515 11:57:02.891520 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.891571 kubelet[3279]: E0515 11:57:02.891535 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.891812 kubelet[3279]: E0515 11:57:02.891770 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.891812 kubelet[3279]: W0515 11:57:02.891780 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.891812 kubelet[3279]: E0515 11:57:02.891789 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.892020 kubelet[3279]: E0515 11:57:02.892008 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.892141 kubelet[3279]: W0515 11:57:02.892067 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.892141 kubelet[3279]: E0515 11:57:02.892080 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.892357 kubelet[3279]: E0515 11:57:02.892314 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.892357 kubelet[3279]: W0515 11:57:02.892324 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.892357 kubelet[3279]: E0515 11:57:02.892332 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:02.892584 kubelet[3279]: E0515 11:57:02.892561 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.892584 kubelet[3279]: W0515 11:57:02.892571 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.892707 kubelet[3279]: E0515 11:57:02.892669 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.892892 kubelet[3279]: E0515 11:57:02.892872 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.892892 kubelet[3279]: W0515 11:57:02.892883 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.893002 kubelet[3279]: E0515 11:57:02.892966 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.893197 kubelet[3279]: E0515 11:57:02.893149 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.893197 kubelet[3279]: W0515 11:57:02.893158 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.893197 kubelet[3279]: E0515 11:57:02.893166 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.893454 kubelet[3279]: E0515 11:57:02.893396 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.893454 kubelet[3279]: W0515 11:57:02.893407 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.893454 kubelet[3279]: E0515 11:57:02.893419 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.893704 kubelet[3279]: E0515 11:57:02.893662 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.893704 kubelet[3279]: W0515 11:57:02.893672 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.893704 kubelet[3279]: E0515 11:57:02.893681 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:02.893959 kubelet[3279]: E0515 11:57:02.893910 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.893959 kubelet[3279]: W0515 11:57:02.893919 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.893959 kubelet[3279]: E0515 11:57:02.893927 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.894199 kubelet[3279]: E0515 11:57:02.894154 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.894199 kubelet[3279]: W0515 11:57:02.894164 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.894199 kubelet[3279]: E0515 11:57:02.894172 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.894478 kubelet[3279]: E0515 11:57:02.894402 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.894478 kubelet[3279]: W0515 11:57:02.894413 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.894478 kubelet[3279]: E0515 11:57:02.894422 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.894727 kubelet[3279]: E0515 11:57:02.894708 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.894831 kubelet[3279]: W0515 11:57:02.894718 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.894831 kubelet[3279]: E0515 11:57:02.894796 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.895057 kubelet[3279]: E0515 11:57:02.895012 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.895057 kubelet[3279]: W0515 11:57:02.895022 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.895057 kubelet[3279]: E0515 11:57:02.895030 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:02.895253 kubelet[3279]: E0515 11:57:02.895243 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.895384 kubelet[3279]: W0515 11:57:02.895323 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.895384 kubelet[3279]: E0515 11:57:02.895336 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.895623 kubelet[3279]: E0515 11:57:02.895578 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.895623 kubelet[3279]: W0515 11:57:02.895589 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.895623 kubelet[3279]: E0515 11:57:02.895598 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.895888 kubelet[3279]: E0515 11:57:02.895840 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.895888 kubelet[3279]: W0515 11:57:02.895849 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.895888 kubelet[3279]: E0515 11:57:02.895857 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.896127 kubelet[3279]: E0515 11:57:02.896101 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.896127 kubelet[3279]: W0515 11:57:02.896110 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.896246 kubelet[3279]: E0515 11:57:02.896207 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.896433 kubelet[3279]: E0515 11:57:02.896422 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.896433 kubelet[3279]: W0515 11:57:02.896482 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.896433 kubelet[3279]: E0515 11:57:02.896496 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:02.896754 kubelet[3279]: E0515 11:57:02.896725 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.896754 kubelet[3279]: W0515 11:57:02.896735 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.896889 kubelet[3279]: E0515 11:57:02.896811 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:02.896981 kubelet[3279]: E0515 11:57:02.896973 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:02.897031 kubelet[3279]: W0515 11:57:02.897022 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:02.897087 kubelet[3279]: E0515 11:57:02.897075 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:03.623051 kubelet[3279]: E0515 11:57:03.623006 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvch9" podUID="db9a440f-784b-44cc-94cc-545a3468a789" May 15 11:57:03.954076 containerd[1874]: time="2025-05-15T11:57:03.953575998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:03.956080 containerd[1874]: time="2025-05-15T11:57:03.956059352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 11:57:03.960333 containerd[1874]: time="2025-05-15T11:57:03.960297995Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:03.963927 containerd[1874]: time="2025-05-15T11:57:03.963882743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:03.964789 containerd[1874]: time="2025-05-15T11:57:03.964703298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.213822846s" May 15 11:57:03.964789 containerd[1874]: time="2025-05-15T11:57:03.964728250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 11:57:03.965663 containerd[1874]: time="2025-05-15T11:57:03.965519029Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 11:57:03.974548 containerd[1874]: time="2025-05-15T11:57:03.973987811Z" level=info msg="CreateContainer within sandbox \"308ddd940484e208cf6004c2d898c0e590da08fe15824ba06874d7b72fece791\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 11:57:03.991907 containerd[1874]: time="2025-05-15T11:57:03.991884925Z" level=info msg="Container ae658401ee7fdd5a8cbcf275d4849e2a885ffd9a000eedf2e48f8b6db7ccca70: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:04.006201 containerd[1874]: time="2025-05-15T11:57:04.006171971Z" level=info msg="CreateContainer within sandbox \"308ddd940484e208cf6004c2d898c0e590da08fe15824ba06874d7b72fece791\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae658401ee7fdd5a8cbcf275d4849e2a885ffd9a000eedf2e48f8b6db7ccca70\"" May 15 11:57:04.007072 containerd[1874]: time="2025-05-15T11:57:04.007025879Z" level=info msg="StartContainer for \"ae658401ee7fdd5a8cbcf275d4849e2a885ffd9a000eedf2e48f8b6db7ccca70\"" May 15 11:57:04.008227 containerd[1874]: time="2025-05-15T11:57:04.008202682Z" level=info msg="connecting to shim ae658401ee7fdd5a8cbcf275d4849e2a885ffd9a000eedf2e48f8b6db7ccca70" address="unix:///run/containerd/s/c0b97c51575681a3de633c28482face87d4341fff9af6c56df40f16347244d0b" protocol=ttrpc version=3 May 15 11:57:04.026655 systemd[1]: Started cri-containerd-ae658401ee7fdd5a8cbcf275d4849e2a885ffd9a000eedf2e48f8b6db7ccca70.scope - libcontainer container ae658401ee7fdd5a8cbcf275d4849e2a885ffd9a000eedf2e48f8b6db7ccca70. May 15 11:57:04.055200 containerd[1874]: time="2025-05-15T11:57:04.055166388Z" level=info msg="StartContainer for \"ae658401ee7fdd5a8cbcf275d4849e2a885ffd9a000eedf2e48f8b6db7ccca70\" returns successfully" May 15 11:57:04.692151 kubelet[3279]: I0515 11:57:04.691675 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59fbcfc7f5-xg4r6" podStartSLOduration=1.475390301 podStartE2EDuration="3.691661902s" podCreationTimestamp="2025-05-15 11:57:01 +0000 UTC" firstStartedPulling="2025-05-15 11:57:01.748961181 +0000 UTC m=+13.196791591" lastFinishedPulling="2025-05-15 11:57:03.96523279 +0000 UTC m=+15.413063192" observedRunningTime="2025-05-15 11:57:04.691575812 +0000 UTC m=+16.139406214" watchObservedRunningTime="2025-05-15 11:57:04.691661902 +0000 UTC m=+16.139492304" May 15 11:57:04.706972 kubelet[3279]: E0515 11:57:04.706911 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.706972 kubelet[3279]: W0515 11:57:04.706926 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.706972 kubelet[3279]: E0515 11:57:04.706940 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:04.707146 kubelet[3279]: E0515 11:57:04.707034 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707146 kubelet[3279]: W0515 11:57:04.707039 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707146 kubelet[3279]: E0515 11:57:04.707045 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707146 kubelet[3279]: E0515 11:57:04.707138 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707146 kubelet[3279]: W0515 11:57:04.707143 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707146 kubelet[3279]: E0515 11:57:04.707148 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707294 kubelet[3279]: E0515 11:57:04.707288 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707311 kubelet[3279]: W0515 11:57:04.707296 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707311 kubelet[3279]: E0515 11:57:04.707305 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707411 kubelet[3279]: E0515 11:57:04.707402 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707411 kubelet[3279]: W0515 11:57:04.707409 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707487 kubelet[3279]: E0515 11:57:04.707415 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707531 kubelet[3279]: E0515 11:57:04.707519 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707531 kubelet[3279]: W0515 11:57:04.707527 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707586 kubelet[3279]: E0515 11:57:04.707533 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:04.707628 kubelet[3279]: E0515 11:57:04.707608 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707628 kubelet[3279]: W0515 11:57:04.707612 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707628 kubelet[3279]: E0515 11:57:04.707619 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707709 kubelet[3279]: E0515 11:57:04.707689 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707709 kubelet[3279]: W0515 11:57:04.707693 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707709 kubelet[3279]: E0515 11:57:04.707698 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707782 kubelet[3279]: E0515 11:57:04.707772 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707782 kubelet[3279]: W0515 11:57:04.707776 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707782 kubelet[3279]: E0515 11:57:04.707781 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707862 kubelet[3279]: E0515 11:57:04.707845 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707862 kubelet[3279]: W0515 11:57:04.707848 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707862 kubelet[3279]: E0515 11:57:04.707852 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.707933 kubelet[3279]: E0515 11:57:04.707914 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.707933 kubelet[3279]: W0515 11:57:04.707918 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.707933 kubelet[3279]: E0515 11:57:04.707923 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:04.708010 kubelet[3279]: E0515 11:57:04.707993 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.708010 kubelet[3279]: W0515 11:57:04.707997 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.708010 kubelet[3279]: E0515 11:57:04.708002 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.708087 kubelet[3279]: E0515 11:57:04.708072 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.708087 kubelet[3279]: W0515 11:57:04.708076 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.708087 kubelet[3279]: E0515 11:57:04.708080 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.708163 kubelet[3279]: E0515 11:57:04.708146 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.708163 kubelet[3279]: W0515 11:57:04.708150 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.708163 kubelet[3279]: E0515 11:57:04.708154 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.708233 kubelet[3279]: E0515 11:57:04.708215 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.708233 kubelet[3279]: W0515 11:57:04.708219 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.708233 kubelet[3279]: E0515 11:57:04.708223 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.717568 kubelet[3279]: E0515 11:57:04.717553 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.717762 kubelet[3279]: W0515 11:57:04.717641 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.717762 kubelet[3279]: E0515 11:57:04.717657 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:04.717996 kubelet[3279]: E0515 11:57:04.717901 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.717996 kubelet[3279]: W0515 11:57:04.717911 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.717996 kubelet[3279]: E0515 11:57:04.717925 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.718121 kubelet[3279]: E0515 11:57:04.718112 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.718272 kubelet[3279]: W0515 11:57:04.718154 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.718272 kubelet[3279]: E0515 11:57:04.718175 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.718383 kubelet[3279]: E0515 11:57:04.718374 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.718431 kubelet[3279]: W0515 11:57:04.718422 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.718497 kubelet[3279]: E0515 11:57:04.718487 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.718680 kubelet[3279]: E0515 11:57:04.718647 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.718680 kubelet[3279]: W0515 11:57:04.718655 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.718680 kubelet[3279]: E0515 11:57:04.718668 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.718801 kubelet[3279]: E0515 11:57:04.718784 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.718801 kubelet[3279]: W0515 11:57:04.718798 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.718842 kubelet[3279]: E0515 11:57:04.718811 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:04.718912 kubelet[3279]: E0515 11:57:04.718899 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.718912 kubelet[3279]: W0515 11:57:04.718908 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.718991 kubelet[3279]: E0515 11:57:04.718914 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.718991 kubelet[3279]: E0515 11:57:04.718989 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.719022 kubelet[3279]: W0515 11:57:04.718994 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.719022 kubelet[3279]: E0515 11:57:04.719001 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.719102 kubelet[3279]: E0515 11:57:04.719092 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.719121 kubelet[3279]: W0515 11:57:04.719106 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.719121 kubelet[3279]: E0515 11:57:04.719112 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.719240 kubelet[3279]: E0515 11:57:04.719227 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.719240 kubelet[3279]: W0515 11:57:04.719236 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.719281 kubelet[3279]: E0515 11:57:04.719242 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.719424 kubelet[3279]: E0515 11:57:04.719412 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.719424 kubelet[3279]: W0515 11:57:04.719422 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.719472 kubelet[3279]: E0515 11:57:04.719428 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:04.719536 kubelet[3279]: E0515 11:57:04.719525 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.719536 kubelet[3279]: W0515 11:57:04.719534 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.719565 kubelet[3279]: E0515 11:57:04.719540 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.719708 kubelet[3279]: E0515 11:57:04.719694 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.719708 kubelet[3279]: W0515 11:57:04.719704 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.719757 kubelet[3279]: E0515 11:57:04.719714 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.719897 kubelet[3279]: E0515 11:57:04.719886 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.719897 kubelet[3279]: W0515 11:57:04.719894 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.719974 kubelet[3279]: E0515 11:57:04.719963 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.720019 kubelet[3279]: E0515 11:57:04.720008 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.720019 kubelet[3279]: W0515 11:57:04.720016 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.720104 kubelet[3279]: E0515 11:57:04.720034 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.720141 kubelet[3279]: E0515 11:57:04.720106 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.720141 kubelet[3279]: W0515 11:57:04.720111 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.720141 kubelet[3279]: E0515 11:57:04.720120 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 11:57:04.720222 kubelet[3279]: E0515 11:57:04.720211 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.720222 kubelet[3279]: W0515 11:57:04.720217 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.720222 kubelet[3279]: E0515 11:57:04.720223 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:04.720830 kubelet[3279]: E0515 11:57:04.720815 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 11:57:04.720830 kubelet[3279]: W0515 11:57:04.720827 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 11:57:04.720879 kubelet[3279]: E0515 11:57:04.720836 3279 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 11:57:05.462726 containerd[1874]: time="2025-05-15T11:57:05.462672487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:05.465796 containerd[1874]: time="2025-05-15T11:57:05.465764255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 11:57:05.470215 containerd[1874]: time="2025-05-15T11:57:05.470170542Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:05.475590 containerd[1874]: time="2025-05-15T11:57:05.475545596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:05.475924 containerd[1874]: time="2025-05-15T11:57:05.475808378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.510268301s" May 15 11:57:05.475924 containerd[1874]: time="2025-05-15T11:57:05.475837107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 11:57:05.478294 containerd[1874]: time="2025-05-15T11:57:05.477909299Z" level=info msg="CreateContainer within sandbox \"63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 11:57:05.503285 containerd[1874]: time="2025-05-15T11:57:05.503259820Z" level=info msg="Container 7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0: 
CDI devices from CRI Config.CDIDevices: []" May 15 11:57:05.518980 containerd[1874]: time="2025-05-15T11:57:05.518952810Z" level=info msg="CreateContainer within sandbox \"63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0\"" May 15 11:57:05.519817 containerd[1874]: time="2025-05-15T11:57:05.519760741Z" level=info msg="StartContainer for \"7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0\"" May 15 11:57:05.521202 containerd[1874]: time="2025-05-15T11:57:05.521179438Z" level=info msg="connecting to shim 7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0" address="unix:///run/containerd/s/ed3b9bd3ae8ed78745020a4eb141f3a9d0b75328bcc11804df5c72a5d7c524f8" protocol=ttrpc version=3 May 15 11:57:05.538561 systemd[1]: Started cri-containerd-7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0.scope - libcontainer container 7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0. May 15 11:57:05.574332 containerd[1874]: time="2025-05-15T11:57:05.574205253Z" level=info msg="StartContainer for \"7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0\" returns successfully" May 15 11:57:05.581141 systemd[1]: cri-containerd-7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0.scope: Deactivated successfully. May 15 11:57:05.584957 containerd[1874]: time="2025-05-15T11:57:05.584925576Z" level=info msg="received exit event container_id:\"7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0\" id:\"7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0\" pid:4040 exited_at:{seconds:1747310225 nanos:584346986}" May 15 11:57:05.585280 containerd[1874]: time="2025-05-15T11:57:05.584988593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0\" id:\"7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0\" pid:4040 exited_at:{seconds:1747310225 nanos:584346986}" May 15 11:57:05.605006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eafd237a9835c5a8279879db000ff5a2e9cab6b5afe4ed5607983715f29dba0-rootfs.mount: Deactivated successfully. 
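[Editor's note] The repeated "Failed to unmarshal output for command: init, output: \"\"" entries above come from kubelet probing the FlexVolume plugin directory nodeagent~uds before the Calico flexvol-driver init container (built from the pod2daemon-flexvol image pulled here) has installed the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/. A missing executable yields empty stdout, which is not valid JSON. As a minimal sketch, assuming only the documented FlexVolume response convention (status/message/capabilities fields) and not the actual uds driver, this is the kind of payload driver-call.go expects back from "init":

// Sketch only: illustrates the JSON shape kubelet's FlexVolume driver-call
// expects from the "init" command; this is not Calico's uds driver.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus follows the FlexVolume driver response convention.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The missing uds binary produces empty output, hence
	// "unexpected end of JSON input". A working driver would print:
	out, err := json.Marshal(driverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Once the flexvol-driver container above has populated the plugin directory, these probe errors should stop on the next kubelet plugin rescan.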
May 15 11:57:05.623295 kubelet[3279]: E0515 11:57:05.623129 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvch9" podUID="db9a440f-784b-44cc-94cc-545a3468a789" May 15 11:57:05.685781 kubelet[3279]: I0515 11:57:05.685741 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 11:57:06.846831 kubelet[3279]: I0515 11:57:06.846535 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 11:57:07.622567 kubelet[3279]: E0515 11:57:07.622502 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvch9" podUID="db9a440f-784b-44cc-94cc-545a3468a789" May 15 11:57:07.691313 containerd[1874]: time="2025-05-15T11:57:07.691277895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 11:57:09.623221 kubelet[3279]: E0515 11:57:09.623175 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvch9" podUID="db9a440f-784b-44cc-94cc-545a3468a789" May 15 11:57:11.458297 containerd[1874]: time="2025-05-15T11:57:11.458249315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:11.470649 containerd[1874]: time="2025-05-15T11:57:11.470611565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 11:57:11.476370 containerd[1874]: time="2025-05-15T11:57:11.476306726Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:11.479254 containerd[1874]: time="2025-05-15T11:57:11.479199056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:11.479814 containerd[1874]: time="2025-05-15T11:57:11.479666995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.788264922s" May 15 11:57:11.479814 containerd[1874]: time="2025-05-15T11:57:11.479689564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 11:57:11.482411 containerd[1874]: time="2025-05-15T11:57:11.482377032Z" level=info msg="CreateContainer within sandbox \"63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 11:57:11.504010 containerd[1874]: time="2025-05-15T11:57:11.503978094Z" level=info 
msg="Container 24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:11.517414 containerd[1874]: time="2025-05-15T11:57:11.517385435Z" level=info msg="CreateContainer within sandbox \"63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9\"" May 15 11:57:11.518018 containerd[1874]: time="2025-05-15T11:57:11.517866887Z" level=info msg="StartContainer for \"24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9\"" May 15 11:57:11.519082 containerd[1874]: time="2025-05-15T11:57:11.519011140Z" level=info msg="connecting to shim 24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9" address="unix:///run/containerd/s/ed3b9bd3ae8ed78745020a4eb141f3a9d0b75328bcc11804df5c72a5d7c524f8" protocol=ttrpc version=3 May 15 11:57:11.539541 systemd[1]: Started cri-containerd-24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9.scope - libcontainer container 24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9. May 15 11:57:11.565236 containerd[1874]: time="2025-05-15T11:57:11.565205204Z" level=info msg="StartContainer for \"24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9\" returns successfully" May 15 11:57:11.623054 kubelet[3279]: E0515 11:57:11.622992 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvch9" podUID="db9a440f-784b-44cc-94cc-545a3468a789" May 15 11:57:12.790902 containerd[1874]: time="2025-05-15T11:57:12.790625148Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 11:57:12.792635 systemd[1]: cri-containerd-24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9.scope: Deactivated successfully. May 15 11:57:12.793472 systemd[1]: cri-containerd-24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9.scope: Consumed 284ms CPU time, 169M memory peak, 150.3M written to disk. May 15 11:57:12.795230 containerd[1874]: time="2025-05-15T11:57:12.795207960Z" level=info msg="received exit event container_id:\"24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9\" id:\"24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9\" pid:4100 exited_at:{seconds:1747310232 nanos:795057116}" May 15 11:57:12.795930 containerd[1874]: time="2025-05-15T11:57:12.795905642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9\" id:\"24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9\" pid:4100 exited_at:{seconds:1747310232 nanos:795057116}" May 15 11:57:12.809572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24909d25784eadc8c12b030e0b0ba32dca1416224ea89ea0da677eb1a4e52ee9-rootfs.mount: Deactivated successfully. 
May 15 11:57:12.860492 kubelet[3279]: I0515 11:57:12.860462 3279 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 11:57:13.337807 kubelet[3279]: W0515 11:57:12.903839 3279 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4334.0.0-a-59732b8df3" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4334.0.0-a-59732b8df3' and this object May 15 11:57:13.337807 kubelet[3279]: E0515 11:57:12.903868 3279 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4334.0.0-a-59732b8df3\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4334.0.0-a-59732b8df3' and this object" logger="UnhandledError" May 15 11:57:13.337807 kubelet[3279]: W0515 11:57:12.903902 3279 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4334.0.0-a-59732b8df3" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4334.0.0-a-59732b8df3' and this object May 15 11:57:13.337807 kubelet[3279]: E0515 11:57:12.903910 3279 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4334.0.0-a-59732b8df3\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4334.0.0-a-59732b8df3' and this object" logger="UnhandledError" May 15 11:57:13.337807 kubelet[3279]: W0515 11:57:12.904501 3279 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4334.0.0-a-59732b8df3" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4334.0.0-a-59732b8df3' and this object May 15 11:57:12.894105 systemd[1]: Created slice kubepods-besteffort-pod69a43828_ae35_4eb0_a0e4_2f4a0dfd7a47.slice - libcontainer container kubepods-besteffort-pod69a43828_ae35_4eb0_a0e4_2f4a0dfd7a47.slice. 
May 15 11:57:13.338040 kubelet[3279]: E0515 11:57:12.904521 3279 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4334.0.0-a-59732b8df3\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4334.0.0-a-59732b8df3' and this object" logger="UnhandledError" May 15 11:57:13.338040 kubelet[3279]: I0515 11:57:12.973351 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47-tigera-ca-bundle\") pod \"calico-kube-controllers-96dfc677d-wk2vl\" (UID: \"69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47\") " pod="calico-system/calico-kube-controllers-96dfc677d-wk2vl" May 15 11:57:13.338040 kubelet[3279]: I0515 11:57:12.973382 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28-calico-apiserver-certs\") pod \"calico-apiserver-5ddd4595cc-xmn59\" (UID: \"c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28\") " pod="calico-apiserver/calico-apiserver-5ddd4595cc-xmn59" May 15 11:57:13.338040 kubelet[3279]: I0515 11:57:12.973393 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvw76\" (UniqueName: \"kubernetes.io/projected/3a542d65-998c-4e93-9f30-3a517d0acb36-kube-api-access-jvw76\") pod \"calico-apiserver-5ddd4595cc-xq587\" (UID: \"3a542d65-998c-4e93-9f30-3a517d0acb36\") " pod="calico-apiserver/calico-apiserver-5ddd4595cc-xq587" May 15 11:57:13.338040 kubelet[3279]: I0515 11:57:12.973405 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9f8b73d-9f99-48d0-821f-bd88c2524d23-config-volume\") pod \"coredns-668d6bf9bc-7rpbh\" (UID: \"b9f8b73d-9f99-48d0-821f-bd88c2524d23\") " pod="kube-system/coredns-668d6bf9bc-7rpbh" May 15 11:57:12.901334 systemd[1]: Created slice kubepods-burstable-podfbb91975_1b08_4ee7_8250_0542d4c5fd5b.slice - libcontainer container kubepods-burstable-podfbb91975_1b08_4ee7_8250_0542d4c5fd5b.slice. 
May 15 11:57:13.338152 kubelet[3279]: I0515 11:57:12.973483 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4zc\" (UniqueName: \"kubernetes.io/projected/fbb91975-1b08-4ee7-8250-0542d4c5fd5b-kube-api-access-7m4zc\") pod \"coredns-668d6bf9bc-jc2wm\" (UID: \"fbb91975-1b08-4ee7-8250-0542d4c5fd5b\") " pod="kube-system/coredns-668d6bf9bc-jc2wm" May 15 11:57:13.338152 kubelet[3279]: I0515 11:57:12.973522 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cln\" (UniqueName: \"kubernetes.io/projected/b9f8b73d-9f99-48d0-821f-bd88c2524d23-kube-api-access-z8cln\") pod \"coredns-668d6bf9bc-7rpbh\" (UID: \"b9f8b73d-9f99-48d0-821f-bd88c2524d23\") " pod="kube-system/coredns-668d6bf9bc-7rpbh" May 15 11:57:13.338152 kubelet[3279]: I0515 11:57:12.973537 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8jfx\" (UniqueName: \"kubernetes.io/projected/c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28-kube-api-access-t8jfx\") pod \"calico-apiserver-5ddd4595cc-xmn59\" (UID: \"c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28\") " pod="calico-apiserver/calico-apiserver-5ddd4595cc-xmn59" May 15 11:57:13.338152 kubelet[3279]: I0515 11:57:12.973549 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcsx\" (UniqueName: \"kubernetes.io/projected/69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47-kube-api-access-bkcsx\") pod \"calico-kube-controllers-96dfc677d-wk2vl\" (UID: \"69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47\") " pod="calico-system/calico-kube-controllers-96dfc677d-wk2vl" May 15 11:57:13.338152 kubelet[3279]: I0515 11:57:12.973562 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a542d65-998c-4e93-9f30-3a517d0acb36-calico-apiserver-certs\") pod \"calico-apiserver-5ddd4595cc-xq587\" (UID: \"3a542d65-998c-4e93-9f30-3a517d0acb36\") " pod="calico-apiserver/calico-apiserver-5ddd4595cc-xq587" May 15 11:57:12.908074 systemd[1]: Created slice kubepods-besteffort-pod3a542d65_998c_4e93_9f30_3a517d0acb36.slice - libcontainer container kubepods-besteffort-pod3a542d65_998c_4e93_9f30_3a517d0acb36.slice. May 15 11:57:13.338256 kubelet[3279]: I0515 11:57:12.973588 3279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbb91975-1b08-4ee7-8250-0542d4c5fd5b-config-volume\") pod \"coredns-668d6bf9bc-jc2wm\" (UID: \"fbb91975-1b08-4ee7-8250-0542d4c5fd5b\") " pod="kube-system/coredns-668d6bf9bc-jc2wm" May 15 11:57:12.914081 systemd[1]: Created slice kubepods-burstable-podb9f8b73d_9f99_48d0_821f_bd88c2524d23.slice - libcontainer container kubepods-burstable-podb9f8b73d_9f99_48d0_821f_bd88c2524d23.slice. May 15 11:57:12.920146 systemd[1]: Created slice kubepods-besteffort-podc3a54e6c_f5f4_4045_86f8_f2a82e8f4c28.slice - libcontainer container kubepods-besteffort-podc3a54e6c_f5f4_4045_86f8_f2a82e8f4c28.slice. May 15 11:57:13.628732 systemd[1]: Created slice kubepods-besteffort-poddb9a440f_784b_44cc_94cc_545a3468a789.slice - libcontainer container kubepods-besteffort-poddb9a440f_784b_44cc_94cc_545a3468a789.slice. 
May 15 11:57:13.630587 containerd[1874]: time="2025-05-15T11:57:13.630492219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvch9,Uid:db9a440f-784b-44cc-94cc-545a3468a789,Namespace:calico-system,Attempt:0,}" May 15 11:57:13.641618 containerd[1874]: time="2025-05-15T11:57:13.641559620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dfc677d-wk2vl,Uid:69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47,Namespace:calico-system,Attempt:0,}" May 15 11:57:14.075461 kubelet[3279]: E0515 11:57:14.075305 3279 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.075461 kubelet[3279]: E0515 11:57:14.075389 3279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9f8b73d-9f99-48d0-821f-bd88c2524d23-config-volume podName:b9f8b73d-9f99-48d0-821f-bd88c2524d23 nodeName:}" failed. No retries permitted until 2025-05-15 11:57:14.575370329 +0000 UTC m=+26.023200739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b9f8b73d-9f99-48d0-821f-bd88c2524d23-config-volume") pod "coredns-668d6bf9bc-7rpbh" (UID: "b9f8b73d-9f99-48d0-821f-bd88c2524d23") : failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.075835 kubelet[3279]: E0515 11:57:14.075608 3279 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.075835 kubelet[3279]: E0515 11:57:14.075633 3279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fbb91975-1b08-4ee7-8250-0542d4c5fd5b-config-volume podName:fbb91975-1b08-4ee7-8250-0542d4c5fd5b nodeName:}" failed. No retries permitted until 2025-05-15 11:57:14.575625312 +0000 UTC m=+26.023455714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fbb91975-1b08-4ee7-8250-0542d4c5fd5b-config-volume") pod "coredns-668d6bf9bc-jc2wm" (UID: "fbb91975-1b08-4ee7-8250-0542d4c5fd5b") : failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.075835 kubelet[3279]: E0515 11:57:14.075650 3279 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition May 15 11:57:14.075835 kubelet[3279]: E0515 11:57:14.075668 3279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a542d65-998c-4e93-9f30-3a517d0acb36-calico-apiserver-certs podName:3a542d65-998c-4e93-9f30-3a517d0acb36 nodeName:}" failed. No retries permitted until 2025-05-15 11:57:14.575664105 +0000 UTC m=+26.023494515 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3a542d65-998c-4e93-9f30-3a517d0acb36-calico-apiserver-certs") pod "calico-apiserver-5ddd4595cc-xq587" (UID: "3a542d65-998c-4e93-9f30-3a517d0acb36") : failed to sync secret cache: timed out waiting for the condition May 15 11:57:14.075835 kubelet[3279]: E0515 11:57:14.075689 3279 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition May 15 11:57:14.075939 kubelet[3279]: E0515 11:57:14.075706 3279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28-calico-apiserver-certs podName:c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28 nodeName:}" failed. No retries permitted until 2025-05-15 11:57:14.575701482 +0000 UTC m=+26.023531884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28-calico-apiserver-certs") pod "calico-apiserver-5ddd4595cc-xmn59" (UID: "c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28") : failed to sync secret cache: timed out waiting for the condition May 15 11:57:14.079840 kubelet[3279]: E0515 11:57:14.079821 3279 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.079909 kubelet[3279]: E0515 11:57:14.079862 3279 projected.go:194] Error preparing data for projected volume kube-api-access-jvw76 for pod calico-apiserver/calico-apiserver-5ddd4595cc-xq587: failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.079909 kubelet[3279]: E0515 11:57:14.079896 3279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a542d65-998c-4e93-9f30-3a517d0acb36-kube-api-access-jvw76 podName:3a542d65-998c-4e93-9f30-3a517d0acb36 nodeName:}" failed. No retries permitted until 2025-05-15 11:57:14.57988838 +0000 UTC m=+26.027718790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jvw76" (UniqueName: "kubernetes.io/projected/3a542d65-998c-4e93-9f30-3a517d0acb36-kube-api-access-jvw76") pod "calico-apiserver-5ddd4595cc-xq587" (UID: "3a542d65-998c-4e93-9f30-3a517d0acb36") : failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.083808 kubelet[3279]: E0515 11:57:14.083751 3279 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.083808 kubelet[3279]: E0515 11:57:14.083773 3279 projected.go:194] Error preparing data for projected volume kube-api-access-t8jfx for pod calico-apiserver/calico-apiserver-5ddd4595cc-xmn59: failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.083899 kubelet[3279]: E0515 11:57:14.083818 3279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28-kube-api-access-t8jfx podName:c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28 nodeName:}" failed. No retries permitted until 2025-05-15 11:57:14.583796128 +0000 UTC m=+26.031626538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8jfx" (UniqueName: "kubernetes.io/projected/c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28-kube-api-access-t8jfx") pod "calico-apiserver-5ddd4595cc-xmn59" (UID: "c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28") : failed to sync configmap cache: timed out waiting for the condition May 15 11:57:14.647518 containerd[1874]: time="2025-05-15T11:57:14.645892277Z" level=error msg="Failed to destroy network for sandbox \"f70618bcd2e8f43520bce7f59128d67f265b2fabed2b4cef4c5c98d7fda41d25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.647095 systemd[1]: run-netns-cni\x2d4515e0f1\x2df926\x2d076c\x2d9058\x2d29472339ed56.mount: Deactivated successfully. May 15 11:57:14.648240 containerd[1874]: time="2025-05-15T11:57:14.648143102Z" level=error msg="Failed to destroy network for sandbox \"04ad6413d458a981e97876bbc735dfb7f1e9a5c78d6653d4f956600179425762\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.651047 containerd[1874]: time="2025-05-15T11:57:14.650982718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dfc677d-wk2vl,Uid:69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70618bcd2e8f43520bce7f59128d67f265b2fabed2b4cef4c5c98d7fda41d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.651565 kubelet[3279]: E0515 11:57:14.651145 3279 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70618bcd2e8f43520bce7f59128d67f265b2fabed2b4cef4c5c98d7fda41d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.651565 kubelet[3279]: E0515 11:57:14.651202 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70618bcd2e8f43520bce7f59128d67f265b2fabed2b4cef4c5c98d7fda41d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-96dfc677d-wk2vl" May 15 11:57:14.651565 kubelet[3279]: E0515 11:57:14.651216 3279 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70618bcd2e8f43520bce7f59128d67f265b2fabed2b4cef4c5c98d7fda41d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-96dfc677d-wk2vl" May 15 11:57:14.651660 kubelet[3279]: E0515 11:57:14.651252 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-96dfc677d-wk2vl_calico-system(69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-96dfc677d-wk2vl_calico-system(69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70618bcd2e8f43520bce7f59128d67f265b2fabed2b4cef4c5c98d7fda41d25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-96dfc677d-wk2vl" podUID="69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47" May 15 11:57:14.655818 containerd[1874]: time="2025-05-15T11:57:14.655758448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvch9,Uid:db9a440f-784b-44cc-94cc-545a3468a789,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04ad6413d458a981e97876bbc735dfb7f1e9a5c78d6653d4f956600179425762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.655936 kubelet[3279]: E0515 11:57:14.655904 3279 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04ad6413d458a981e97876bbc735dfb7f1e9a5c78d6653d4f956600179425762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.655998 kubelet[3279]: E0515 11:57:14.655942 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04ad6413d458a981e97876bbc735dfb7f1e9a5c78d6653d4f956600179425762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rvch9" May 15 11:57:14.655998 kubelet[3279]: E0515 11:57:14.655969 3279 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04ad6413d458a981e97876bbc735dfb7f1e9a5c78d6653d4f956600179425762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rvch9" May 15 11:57:14.656040 kubelet[3279]: E0515 11:57:14.655998 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rvch9_calico-system(db9a440f-784b-44cc-94cc-545a3468a789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rvch9_calico-system(db9a440f-784b-44cc-94cc-545a3468a789)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04ad6413d458a981e97876bbc735dfb7f1e9a5c78d6653d4f956600179425762\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rvch9" podUID="db9a440f-784b-44cc-94cc-545a3468a789" May 15 11:57:14.842702 containerd[1874]: time="2025-05-15T11:57:14.842672227Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-7rpbh,Uid:b9f8b73d-9f99-48d0-821f-bd88c2524d23,Namespace:kube-system,Attempt:0,}" May 15 11:57:14.842898 containerd[1874]: time="2025-05-15T11:57:14.842631538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xmn59,Uid:c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28,Namespace:calico-apiserver,Attempt:0,}" May 15 11:57:14.843045 containerd[1874]: time="2025-05-15T11:57:14.843026276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xq587,Uid:3a542d65-998c-4e93-9f30-3a517d0acb36,Namespace:calico-apiserver,Attempt:0,}" May 15 11:57:14.845525 containerd[1874]: time="2025-05-15T11:57:14.845495339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jc2wm,Uid:fbb91975-1b08-4ee7-8250-0542d4c5fd5b,Namespace:kube-system,Attempt:0,}" May 15 11:57:14.926339 containerd[1874]: time="2025-05-15T11:57:14.926083197Z" level=error msg="Failed to destroy network for sandbox \"2d9858a696124c9677ba6c668a91e84181b64ef5b3c43df5bfebe4510125d093\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.933557 containerd[1874]: time="2025-05-15T11:57:14.933520954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7rpbh,Uid:b9f8b73d-9f99-48d0-821f-bd88c2524d23,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9858a696124c9677ba6c668a91e84181b64ef5b3c43df5bfebe4510125d093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.933983 containerd[1874]: time="2025-05-15T11:57:14.933914948Z" level=error msg="Failed to destroy network for sandbox \"0aa6cf0a395919e164c034eedeedad3fa9e7f96725b1d3ba4121b86df0325954\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.934053 kubelet[3279]: E0515 11:57:14.934015 3279 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9858a696124c9677ba6c668a91e84181b64ef5b3c43df5bfebe4510125d093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.934098 kubelet[3279]: E0515 11:57:14.934084 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9858a696124c9677ba6c668a91e84181b64ef5b3c43df5bfebe4510125d093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7rpbh" May 15 11:57:14.934123 kubelet[3279]: E0515 11:57:14.934103 3279 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9858a696124c9677ba6c668a91e84181b64ef5b3c43df5bfebe4510125d093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7rpbh" May 15 11:57:14.934204 kubelet[3279]: E0515 11:57:14.934174 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7rpbh_kube-system(b9f8b73d-9f99-48d0-821f-bd88c2524d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7rpbh_kube-system(b9f8b73d-9f99-48d0-821f-bd88c2524d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d9858a696124c9677ba6c668a91e84181b64ef5b3c43df5bfebe4510125d093\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7rpbh" podUID="b9f8b73d-9f99-48d0-821f-bd88c2524d23" May 15 11:57:14.938206 containerd[1874]: time="2025-05-15T11:57:14.938179657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xmn59,Uid:c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa6cf0a395919e164c034eedeedad3fa9e7f96725b1d3ba4121b86df0325954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.938647 kubelet[3279]: E0515 11:57:14.938563 3279 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa6cf0a395919e164c034eedeedad3fa9e7f96725b1d3ba4121b86df0325954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.938647 kubelet[3279]: E0515 11:57:14.938605 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa6cf0a395919e164c034eedeedad3fa9e7f96725b1d3ba4121b86df0325954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xmn59" May 15 11:57:14.938647 kubelet[3279]: E0515 11:57:14.938618 3279 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa6cf0a395919e164c034eedeedad3fa9e7f96725b1d3ba4121b86df0325954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xmn59" May 15 11:57:14.938840 containerd[1874]: time="2025-05-15T11:57:14.938602700Z" level=error msg="Failed to destroy network for sandbox \"eff4de7eb4c11e5e41e33a33e8fab8278771b4354af4df8ad6b9dd4a92da4b5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.938993 kubelet[3279]: E0515 11:57:14.938645 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5ddd4595cc-xmn59_calico-apiserver(c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddd4595cc-xmn59_calico-apiserver(c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aa6cf0a395919e164c034eedeedad3fa9e7f96725b1d3ba4121b86df0325954\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xmn59" podUID="c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28" May 15 11:57:14.943812 containerd[1874]: time="2025-05-15T11:57:14.943786008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xq587,Uid:3a542d65-998c-4e93-9f30-3a517d0acb36,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff4de7eb4c11e5e41e33a33e8fab8278771b4354af4df8ad6b9dd4a92da4b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.944112 kubelet[3279]: E0515 11:57:14.944062 3279 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff4de7eb4c11e5e41e33a33e8fab8278771b4354af4df8ad6b9dd4a92da4b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.944425 kubelet[3279]: E0515 11:57:14.944196 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff4de7eb4c11e5e41e33a33e8fab8278771b4354af4df8ad6b9dd4a92da4b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xq587" May 15 11:57:14.944425 kubelet[3279]: E0515 11:57:14.944212 3279 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff4de7eb4c11e5e41e33a33e8fab8278771b4354af4df8ad6b9dd4a92da4b5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xq587" May 15 11:57:14.944425 kubelet[3279]: E0515 11:57:14.944372 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddd4595cc-xq587_calico-apiserver(3a542d65-998c-4e93-9f30-3a517d0acb36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddd4595cc-xq587_calico-apiserver(3a542d65-998c-4e93-9f30-3a517d0acb36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eff4de7eb4c11e5e41e33a33e8fab8278771b4354af4df8ad6b9dd4a92da4b5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xq587" 
podUID="3a542d65-998c-4e93-9f30-3a517d0acb36" May 15 11:57:14.958234 containerd[1874]: time="2025-05-15T11:57:14.958204630Z" level=error msg="Failed to destroy network for sandbox \"a4431a484d9a4410245566a92a75cc08d7ea91983644556a2a54336bae99874f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.962593 containerd[1874]: time="2025-05-15T11:57:14.962526628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jc2wm,Uid:fbb91975-1b08-4ee7-8250-0542d4c5fd5b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4431a484d9a4410245566a92a75cc08d7ea91983644556a2a54336bae99874f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.962767 kubelet[3279]: E0515 11:57:14.962738 3279 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4431a484d9a4410245566a92a75cc08d7ea91983644556a2a54336bae99874f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 11:57:14.962823 kubelet[3279]: E0515 11:57:14.962775 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4431a484d9a4410245566a92a75cc08d7ea91983644556a2a54336bae99874f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jc2wm" May 15 11:57:14.962823 kubelet[3279]: E0515 11:57:14.962787 3279 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4431a484d9a4410245566a92a75cc08d7ea91983644556a2a54336bae99874f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jc2wm" May 15 11:57:14.962863 kubelet[3279]: E0515 11:57:14.962818 3279 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jc2wm_kube-system(fbb91975-1b08-4ee7-8250-0542d4c5fd5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jc2wm_kube-system(fbb91975-1b08-4ee7-8250-0542d4c5fd5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4431a484d9a4410245566a92a75cc08d7ea91983644556a2a54336bae99874f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jc2wm" podUID="fbb91975-1b08-4ee7-8250-0542d4c5fd5b" May 15 11:57:15.587617 systemd[1]: run-netns-cni\x2d9d7e3678\x2d3c7b\x2dd4d5\x2d1fdc\x2d1e12bd5907c9.mount: Deactivated successfully. 
May 15 11:57:15.707568 containerd[1874]: time="2025-05-15T11:57:15.707538943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 11:57:21.128361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604076621.mount: Deactivated successfully. May 15 11:57:21.935828 containerd[1874]: time="2025-05-15T11:57:21.935381071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:21.937204 containerd[1874]: time="2025-05-15T11:57:21.937181435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 11:57:21.941718 containerd[1874]: time="2025-05-15T11:57:21.941678897Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:21.947189 containerd[1874]: time="2025-05-15T11:57:21.947006316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:21.947289 containerd[1874]: time="2025-05-15T11:57:21.947266547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.23937421s" May 15 11:57:21.947348 containerd[1874]: time="2025-05-15T11:57:21.947335548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 11:57:21.956391 containerd[1874]: time="2025-05-15T11:57:21.956372954Z" level=info msg="CreateContainer within sandbox \"63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 11:57:21.985760 containerd[1874]: time="2025-05-15T11:57:21.985727667Z" level=info msg="Container 3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:22.004157 containerd[1874]: time="2025-05-15T11:57:22.004125183Z" level=info msg="CreateContainer within sandbox \"63d0c3d661a24e4ae6de3cdb141a6c7627bd4a0451b2d6c6e9706494a16a9bb3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\"" May 15 11:57:22.004705 containerd[1874]: time="2025-05-15T11:57:22.004683716Z" level=info msg="StartContainer for \"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\"" May 15 11:57:22.005864 containerd[1874]: time="2025-05-15T11:57:22.005839321Z" level=info msg="connecting to shim 3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71" address="unix:///run/containerd/s/ed3b9bd3ae8ed78745020a4eb141f3a9d0b75328bcc11804df5c72a5d7c524f8" protocol=ttrpc version=3 May 15 11:57:22.018544 systemd[1]: Started cri-containerd-3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71.scope - libcontainer container 3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71. 
May 15 11:57:22.052698 containerd[1874]: time="2025-05-15T11:57:22.052565604Z" level=info msg="StartContainer for \"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\" returns successfully" May 15 11:57:22.439053 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 11:57:22.439155 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 11:57:23.997305 systemd-networkd[1606]: vxlan.calico: Link UP May 15 11:57:23.997569 systemd-networkd[1606]: vxlan.calico: Gained carrier May 15 11:57:24.733145 kubelet[3279]: I0515 11:57:24.733109 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 11:57:24.781633 containerd[1874]: time="2025-05-15T11:57:24.781597187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\" id:\"52cac4490a1509cc84fc1ebebca4901eef3fa24405288b7d25d0dac77d26ea6b\" pid:4598 exit_status:1 exited_at:{seconds:1747310244 nanos:781341437}" May 15 11:57:24.825616 containerd[1874]: time="2025-05-15T11:57:24.825582219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\" id:\"e58e9d0c40ec1958b3d0cd794c2c71e26aed84f5deac2b7e369c422d6cf4d710\" pid:4622 exit_status:1 exited_at:{seconds:1747310244 nanos:825387262}" May 15 11:57:25.623933 containerd[1874]: time="2025-05-15T11:57:25.623861011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jc2wm,Uid:fbb91975-1b08-4ee7-8250-0542d4c5fd5b,Namespace:kube-system,Attempt:0,}" May 15 11:57:25.725765 systemd-networkd[1606]: caliecc707b9d17: Link UP May 15 11:57:25.725926 systemd-networkd[1606]: caliecc707b9d17: Gained carrier May 15 11:57:25.735892 kubelet[3279]: I0515 11:57:25.735559 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qnppn" podStartSLOduration=4.585986858 podStartE2EDuration="24.735542353s" podCreationTimestamp="2025-05-15 11:57:01 +0000 UTC" firstStartedPulling="2025-05-15 11:57:01.79848167 +0000 UTC m=+13.246312072" lastFinishedPulling="2025-05-15 11:57:21.948037157 +0000 UTC m=+33.395867567" observedRunningTime="2025-05-15 11:57:22.739053019 +0000 UTC m=+34.186883429" watchObservedRunningTime="2025-05-15 11:57:25.735542353 +0000 UTC m=+37.183372755" May 15 11:57:25.738521 containerd[1874]: 2025-05-15 11:57:25.671 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0 coredns-668d6bf9bc- kube-system fbb91975-1b08-4ee7-8250-0542d4c5fd5b 677 0 2025-05-15 11:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-59732b8df3 coredns-668d6bf9bc-jc2wm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliecc707b9d17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-" May 15 11:57:25.738521 containerd[1874]: 2025-05-15 11:57:25.671 [INFO][4636] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" May 15 11:57:25.738521 containerd[1874]: 2025-05-15 11:57:25.692 [INFO][4648] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" HandleID="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Workload="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.698 [INFO][4648] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" HandleID="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Workload="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000384ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-59732b8df3", "pod":"coredns-668d6bf9bc-jc2wm", "timestamp":"2025-05-15 11:57:25.692292403 +0000 UTC"}, Hostname:"ci-4334.0.0-a-59732b8df3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.698 [INFO][4648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.698 [INFO][4648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.698 [INFO][4648] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-59732b8df3' May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.700 [INFO][4648] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.702 [INFO][4648] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.705 [INFO][4648] ipam/ipam.go 489: Trying affinity for 192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.707 [INFO][4648] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738649 containerd[1874]: 2025-05-15 11:57:25.708 [INFO][4648] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738861 containerd[1874]: 2025-05-15 11:57:25.708 [INFO][4648] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738861 containerd[1874]: 2025-05-15 11:57:25.709 [INFO][4648] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113 May 15 11:57:25.738861 containerd[1874]: 2025-05-15 11:57:25.712 [INFO][4648] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.192/26 
handle="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738861 containerd[1874]: 2025-05-15 11:57:25.718 [INFO][4648] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.193/26] block=192.168.62.192/26 handle="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738861 containerd[1874]: 2025-05-15 11:57:25.718 [INFO][4648] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.193/26] handle="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:25.738861 containerd[1874]: 2025-05-15 11:57:25.718 [INFO][4648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 11:57:25.738861 containerd[1874]: 2025-05-15 11:57:25.718 [INFO][4648] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.193/26] IPv6=[] ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" HandleID="k8s-pod-network.b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Workload="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" May 15 11:57:25.738957 containerd[1874]: 2025-05-15 11:57:25.721 [INFO][4636] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fbb91975-1b08-4ee7-8250-0542d4c5fd5b", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"", Pod:"coredns-668d6bf9bc-jc2wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecc707b9d17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:25.738957 containerd[1874]: 2025-05-15 11:57:25.721 [INFO][4636] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.193/32] ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" May 15 11:57:25.738957 containerd[1874]: 2025-05-15 11:57:25.721 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecc707b9d17 ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" May 15 11:57:25.738957 containerd[1874]: 2025-05-15 11:57:25.723 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" May 15 11:57:25.738957 containerd[1874]: 2025-05-15 11:57:25.724 [INFO][4636] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fbb91975-1b08-4ee7-8250-0542d4c5fd5b", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113", Pod:"coredns-668d6bf9bc-jc2wm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecc707b9d17", MAC:"82:19:70:2a:19:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:25.738957 containerd[1874]: 2025-05-15 11:57:25.735 [INFO][4636] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" Namespace="kube-system" Pod="coredns-668d6bf9bc-jc2wm" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--jc2wm-eth0" May 15 11:57:25.781048 containerd[1874]: time="2025-05-15T11:57:25.781016102Z" level=info msg="connecting 
to shim b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113" address="unix:///run/containerd/s/0ac22fcb9c286b25a23e3655335d4678674b2345cd670ecc632f8fc046a1c562" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:25.801568 systemd[1]: Started cri-containerd-b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113.scope - libcontainer container b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113. May 15 11:57:25.826471 containerd[1874]: time="2025-05-15T11:57:25.826422113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jc2wm,Uid:fbb91975-1b08-4ee7-8250-0542d4c5fd5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113\"" May 15 11:57:25.830060 containerd[1874]: time="2025-05-15T11:57:25.830035537Z" level=info msg="CreateContainer within sandbox \"b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 11:57:25.853467 containerd[1874]: time="2025-05-15T11:57:25.853429568Z" level=info msg="Container 0b6e88f38a00d1dcc7b9c5a036322110b179b4aa4d583e123c49242ac4765d9d: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:25.864355 containerd[1874]: time="2025-05-15T11:57:25.864325067Z" level=info msg="CreateContainer within sandbox \"b9e146fde47eaf6fb6fe29f751863b7ad69995cc7e2e1a416d804f7ffa5a1113\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b6e88f38a00d1dcc7b9c5a036322110b179b4aa4d583e123c49242ac4765d9d\"" May 15 11:57:25.865656 containerd[1874]: time="2025-05-15T11:57:25.865635132Z" level=info msg="StartContainer for \"0b6e88f38a00d1dcc7b9c5a036322110b179b4aa4d583e123c49242ac4765d9d\"" May 15 11:57:25.866350 containerd[1874]: time="2025-05-15T11:57:25.866328645Z" level=info msg="connecting to shim 0b6e88f38a00d1dcc7b9c5a036322110b179b4aa4d583e123c49242ac4765d9d" address="unix:///run/containerd/s/0ac22fcb9c286b25a23e3655335d4678674b2345cd670ecc632f8fc046a1c562" protocol=ttrpc version=3 May 15 11:57:25.881553 systemd[1]: Started cri-containerd-0b6e88f38a00d1dcc7b9c5a036322110b179b4aa4d583e123c49242ac4765d9d.scope - libcontainer container 0b6e88f38a00d1dcc7b9c5a036322110b179b4aa4d583e123c49242ac4765d9d. May 15 11:57:25.907106 containerd[1874]: time="2025-05-15T11:57:25.907079757Z" level=info msg="StartContainer for \"0b6e88f38a00d1dcc7b9c5a036322110b179b4aa4d583e123c49242ac4765d9d\" returns successfully" May 15 11:57:26.028597 systemd-networkd[1606]: vxlan.calico: Gained IPv6LL May 15 11:57:26.640042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136908957.mount: Deactivated successfully. 
May 15 11:57:26.743070 kubelet[3279]: I0515 11:57:26.742509 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jc2wm" podStartSLOduration=32.742495582 podStartE2EDuration="32.742495582s" podCreationTimestamp="2025-05-15 11:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:57:26.741951096 +0000 UTC m=+38.189781506" watchObservedRunningTime="2025-05-15 11:57:26.742495582 +0000 UTC m=+38.190325992" May 15 11:57:27.623993 containerd[1874]: time="2025-05-15T11:57:27.623952842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xmn59,Uid:c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28,Namespace:calico-apiserver,Attempt:0,}" May 15 11:57:27.624411 containerd[1874]: time="2025-05-15T11:57:27.624094742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xq587,Uid:3a542d65-998c-4e93-9f30-3a517d0acb36,Namespace:calico-apiserver,Attempt:0,}" May 15 11:57:27.624699 containerd[1874]: time="2025-05-15T11:57:27.623970131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dfc677d-wk2vl,Uid:69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47,Namespace:calico-system,Attempt:0,}" May 15 11:57:27.624699 containerd[1874]: time="2025-05-15T11:57:27.624638427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7rpbh,Uid:b9f8b73d-9f99-48d0-821f-bd88c2524d23,Namespace:kube-system,Attempt:0,}" May 15 11:57:27.692725 systemd-networkd[1606]: caliecc707b9d17: Gained IPv6LL May 15 11:57:27.788295 systemd-networkd[1606]: cali61419e5d0d7: Link UP May 15 11:57:27.788998 systemd-networkd[1606]: cali61419e5d0d7: Gained carrier May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.683 [INFO][4751] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0 calico-apiserver-5ddd4595cc- calico-apiserver c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28 678 0 2025-05-15 11:57:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddd4595cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334.0.0-a-59732b8df3 calico-apiserver-5ddd4595cc-xmn59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali61419e5d0d7 [] []}} ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.683 [INFO][4751] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.725 [INFO][4795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" HandleID="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" 
Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.739 [INFO][4795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" HandleID="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cd60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334.0.0-a-59732b8df3", "pod":"calico-apiserver-5ddd4595cc-xmn59", "timestamp":"2025-05-15 11:57:27.725079702 +0000 UTC"}, Hostname:"ci-4334.0.0-a-59732b8df3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.739 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.740 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.740 [INFO][4795] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-59732b8df3' May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.745 [INFO][4795] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.750 [INFO][4795] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.757 [INFO][4795] ipam/ipam.go 489: Trying affinity for 192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.762 [INFO][4795] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.765 [INFO][4795] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.765 [INFO][4795] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.768 [INFO][4795] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.777 [INFO][4795] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.783 [INFO][4795] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.194/26] block=192.168.62.192/26 handle="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.783 [INFO][4795] ipam/ipam.go 847: Auto-assigned 1 out of 1 
IPv4s: [192.168.62.194/26] handle="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.783 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 11:57:27.801421 containerd[1874]: 2025-05-15 11:57:27.784 [INFO][4795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.194/26] IPv6=[] ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" HandleID="k8s-pod-network.49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" May 15 11:57:27.801927 containerd[1874]: 2025-05-15 11:57:27.785 [INFO][4751] cni-plugin/k8s.go 386: Populated endpoint ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0", GenerateName:"calico-apiserver-5ddd4595cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddd4595cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"", Pod:"calico-apiserver-5ddd4595cc-xmn59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61419e5d0d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:27.801927 containerd[1874]: 2025-05-15 11:57:27.785 [INFO][4751] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.194/32] ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" May 15 11:57:27.801927 containerd[1874]: 2025-05-15 11:57:27.786 [INFO][4751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61419e5d0d7 ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" May 15 11:57:27.801927 containerd[1874]: 2025-05-15 11:57:27.790 [INFO][4751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" May 15 11:57:27.801927 containerd[1874]: 2025-05-15 11:57:27.790 [INFO][4751] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0", GenerateName:"calico-apiserver-5ddd4595cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddd4595cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb", Pod:"calico-apiserver-5ddd4595cc-xmn59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61419e5d0d7", MAC:"5e:cc:f7:81:f9:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:27.801927 containerd[1874]: 2025-05-15 11:57:27.798 [INFO][4751] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xmn59" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xmn59-eth0" May 15 11:57:27.873950 systemd-networkd[1606]: cali5ee1dc0a185: Link UP May 15 11:57:27.875031 systemd-networkd[1606]: cali5ee1dc0a185: Gained carrier May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.716 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0 coredns-668d6bf9bc- kube-system b9f8b73d-9f99-48d0-821f-bd88c2524d23 675 0 2025-05-15 11:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-59732b8df3 coredns-668d6bf9bc-7rpbh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ee1dc0a185 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.716 [INFO][4784] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.757 [INFO][4810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" HandleID="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Workload="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.767 [INFO][4810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" HandleID="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Workload="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000312ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-59732b8df3", "pod":"coredns-668d6bf9bc-7rpbh", "timestamp":"2025-05-15 11:57:27.757276362 +0000 UTC"}, Hostname:"ci-4334.0.0-a-59732b8df3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.767 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.783 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.784 [INFO][4810] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-59732b8df3' May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.845 [INFO][4810] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.848 [INFO][4810] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.853 [INFO][4810] ipam/ipam.go 489: Trying affinity for 192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.856 [INFO][4810] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.857 [INFO][4810] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.857 [INFO][4810] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.858 [INFO][4810] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.862 [INFO][4810] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.868 [INFO][4810] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.195/26] block=192.168.62.192/26 handle="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.868 [INFO][4810] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.195/26] handle="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.868 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
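In the IPAM sequence above, the CNI plugin serializes on the host-wide lock, confirms this node's affinity for the block 192.168.62.192/26, and claims 192.168.62.195 for coredns-668d6bf9bc-7rpbh (192.168.62.194 went to calico-apiserver-5ddd4595cc-xmn59 a moment earlier; .196–.198 follow later in this section). A small sketch, using only Python's ipaddress module, that checks the claimed addresses really belong to that affine block and shows the block's capacity:

```python
import ipaddress

block = ipaddress.ip_network("192.168.62.192/26")   # the node's affine block from the log
claimed = [
    "192.168.62.194",   # calico-apiserver-5ddd4595cc-xmn59
    "192.168.62.195",   # coredns-668d6bf9bc-7rpbh
    "192.168.62.196",   # calico-apiserver-5ddd4595cc-xq587 (claimed later in this section)
    "192.168.62.197",   # calico-kube-controllers-96dfc677d-wk2vl (later)
    "192.168.62.198",   # csi-node-driver-rvch9 (later)
]

for addr in claimed:
    assert ipaddress.ip_address(addr) in block, f"{addr} is outside {block}"

print(f"{block} holds {block.num_addresses} addresses; {len(claimed)} claimed in this section")
```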
May 15 11:57:27.890447 containerd[1874]: 2025-05-15 11:57:27.868 [INFO][4810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.195/26] IPv6=[] ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" HandleID="k8s-pod-network.1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Workload="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" May 15 11:57:27.891545 containerd[1874]: 2025-05-15 11:57:27.870 [INFO][4784] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b9f8b73d-9f99-48d0-821f-bd88c2524d23", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"", Pod:"coredns-668d6bf9bc-7rpbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ee1dc0a185", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:27.891545 containerd[1874]: 2025-05-15 11:57:27.871 [INFO][4784] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.195/32] ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" May 15 11:57:27.891545 containerd[1874]: 2025-05-15 11:57:27.871 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ee1dc0a185 ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" May 15 11:57:27.891545 containerd[1874]: 2025-05-15 11:57:27.875 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" 
WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" May 15 11:57:27.891545 containerd[1874]: 2025-05-15 11:57:27.876 [INFO][4784] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b9f8b73d-9f99-48d0-821f-bd88c2524d23", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef", Pod:"coredns-668d6bf9bc-7rpbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ee1dc0a185", MAC:"5e:e4:1f:bc:f2:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:27.891545 containerd[1874]: 2025-05-15 11:57:27.887 [INFO][4784] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rpbh" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-coredns--668d6bf9bc--7rpbh-eth0" May 15 11:57:27.892624 containerd[1874]: time="2025-05-15T11:57:27.892201497Z" level=info msg="connecting to shim 49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb" address="unix:///run/containerd/s/b22aa5c21dfe4f5fd6b44f48edf067a064f0b7b75af5fbd4da6906e5ae8da46d" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:27.915556 systemd[1]: Started cri-containerd-49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb.scope - libcontainer container 49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb. 
May 15 11:57:27.940825 containerd[1874]: time="2025-05-15T11:57:27.940428790Z" level=info msg="connecting to shim 1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef" address="unix:///run/containerd/s/f62506213004065a8241a586f8248e31300b7f885ae2933ea542ecb66fccb461" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:27.951700 containerd[1874]: time="2025-05-15T11:57:27.951628232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xmn59,Uid:c3a54e6c-f5f4-4045-86f8-f2a82e8f4c28,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb\"" May 15 11:57:27.955804 containerd[1874]: time="2025-05-15T11:57:27.955786982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 11:57:27.967567 systemd[1]: Started cri-containerd-1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef.scope - libcontainer container 1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef. May 15 11:57:27.991624 systemd-networkd[1606]: calic3a74b916b5: Link UP May 15 11:57:27.992501 systemd-networkd[1606]: calic3a74b916b5: Gained carrier May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.720 [INFO][4762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0 calico-apiserver-5ddd4595cc- calico-apiserver 3a542d65-998c-4e93-9f30-3a517d0acb36 676 0 2025-05-15 11:57:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddd4595cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334.0.0-a-59732b8df3 calico-apiserver-5ddd4595cc-xq587 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic3a74b916b5 [] []}} ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.720 [INFO][4762] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.771 [INFO][4819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" HandleID="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.838 [INFO][4819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" HandleID="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7d0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4334.0.0-a-59732b8df3", "pod":"calico-apiserver-5ddd4595cc-xq587", "timestamp":"2025-05-15 11:57:27.77156692 +0000 UTC"}, Hostname:"ci-4334.0.0-a-59732b8df3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.842 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.869 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.869 [INFO][4819] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-59732b8df3' May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.946 [INFO][4819] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.956 [INFO][4819] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.964 [INFO][4819] ipam/ipam.go 489: Trying affinity for 192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.966 [INFO][4819] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.969 [INFO][4819] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.969 [INFO][4819] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.971 [INFO][4819] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6 May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.975 [INFO][4819] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.985 [INFO][4819] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.196/26] block=192.168.62.192/26 handle="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.985 [INFO][4819] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.196/26] handle="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.985 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 11:57:28.007546 containerd[1874]: 2025-05-15 11:57:27.985 [INFO][4819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.196/26] IPv6=[] ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" HandleID="k8s-pod-network.f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" May 15 11:57:28.008374 containerd[1874]: 2025-05-15 11:57:27.987 [INFO][4762] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0", GenerateName:"calico-apiserver-5ddd4595cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a542d65-998c-4e93-9f30-3a517d0acb36", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddd4595cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"", Pod:"calico-apiserver-5ddd4595cc-xq587", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3a74b916b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:28.008374 containerd[1874]: 2025-05-15 11:57:27.988 [INFO][4762] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.196/32] ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" May 15 11:57:28.008374 containerd[1874]: 2025-05-15 11:57:27.988 [INFO][4762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3a74b916b5 ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" May 15 11:57:28.008374 containerd[1874]: 2025-05-15 11:57:27.993 [INFO][4762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" May 15 11:57:28.008374 containerd[1874]: 2025-05-15 11:57:27.993 [INFO][4762] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0", GenerateName:"calico-apiserver-5ddd4595cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a542d65-998c-4e93-9f30-3a517d0acb36", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddd4595cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6", Pod:"calico-apiserver-5ddd4595cc-xq587", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3a74b916b5", MAC:"5a:da:6c:7f:c0:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:28.008374 containerd[1874]: 2025-05-15 11:57:28.002 [INFO][4762] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5ddd4595cc-xq587" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--apiserver--5ddd4595cc--xq587-eth0" May 15 11:57:28.026301 containerd[1874]: time="2025-05-15T11:57:28.026198474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7rpbh,Uid:b9f8b73d-9f99-48d0-821f-bd88c2524d23,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef\"" May 15 11:57:28.028506 containerd[1874]: time="2025-05-15T11:57:28.028408872Z" level=info msg="CreateContainer within sandbox \"1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 11:57:28.069932 containerd[1874]: time="2025-05-15T11:57:28.069900711Z" level=info msg="connecting to shim f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6" address="unix:///run/containerd/s/54ec94b83ed71b1e53eee876a87c676e5acdd193f1825d544c4c97019834d7b4" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:28.075978 containerd[1874]: time="2025-05-15T11:57:28.075946035Z" level=info msg="Container de2c031ab07916748bb55a35b211dc9fe21591676f68f28fe0dc533c04c073de: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:28.092570 systemd[1]: Started cri-containerd-f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6.scope - libcontainer container 
f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6. May 15 11:57:28.096896 containerd[1874]: time="2025-05-15T11:57:28.096476826Z" level=info msg="CreateContainer within sandbox \"1e28a35c88cefb7be16d9f97f0c8f8a93a6c1e3d753e803c31c0e42eef609bef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de2c031ab07916748bb55a35b211dc9fe21591676f68f28fe0dc533c04c073de\"" May 15 11:57:28.098568 containerd[1874]: time="2025-05-15T11:57:28.098118346Z" level=info msg="StartContainer for \"de2c031ab07916748bb55a35b211dc9fe21591676f68f28fe0dc533c04c073de\"" May 15 11:57:28.100336 containerd[1874]: time="2025-05-15T11:57:28.100302336Z" level=info msg="connecting to shim de2c031ab07916748bb55a35b211dc9fe21591676f68f28fe0dc533c04c073de" address="unix:///run/containerd/s/f62506213004065a8241a586f8248e31300b7f885ae2933ea542ecb66fccb461" protocol=ttrpc version=3 May 15 11:57:28.103618 systemd-networkd[1606]: cali9df99673fc0: Link UP May 15 11:57:28.104605 systemd-networkd[1606]: cali9df99673fc0: Gained carrier May 15 11:57:28.131744 systemd[1]: Started cri-containerd-de2c031ab07916748bb55a35b211dc9fe21591676f68f28fe0dc533c04c073de.scope - libcontainer container de2c031ab07916748bb55a35b211dc9fe21591676f68f28fe0dc533c04c073de. May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:27.716 [INFO][4773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0 calico-kube-controllers-96dfc677d- calico-system 69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47 671 0 2025-05-15 11:57:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:96dfc677d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4334.0.0-a-59732b8df3 calico-kube-controllers-96dfc677d-wk2vl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9df99673fc0 [] []}} ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:27.716 [INFO][4773] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:27.768 [INFO][4808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" HandleID="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:27.838 [INFO][4808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" HandleID="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-59732b8df3", "pod":"calico-kube-controllers-96dfc677d-wk2vl", "timestamp":"2025-05-15 11:57:27.768259031 +0000 UTC"}, Hostname:"ci-4334.0.0-a-59732b8df3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:27.842 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:27.985 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:27.985 [INFO][4808] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-59732b8df3' May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.047 [INFO][4808] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.056 [INFO][4808] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.063 [INFO][4808] ipam/ipam.go 489: Trying affinity for 192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.065 [INFO][4808] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.076 [INFO][4808] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.076 [INFO][4808] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.078 [INFO][4808] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82 May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.083 [INFO][4808] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.092 [INFO][4808] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.197/26] block=192.168.62.192/26 handle="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.092 [INFO][4808] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.197/26] handle="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.092 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
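Because several sandboxes are being networked concurrently, their CNI invocations serialize on the host-wide IPAM lock: the calico-kube-controllers allocation above ([INFO][4808]) logs "About to acquire" at 11:57:27.842 but only acquires the lock at 11:57:27.985, after the coredns and apiserver allocations have released it. A quick back-of-the-envelope check of that wait and hold time, using the millisecond-precision timestamps from the Calico log prefix (so the figures are approximate):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps from the [INFO][4808] Calico IPAM entries above (millisecond precision).
about_to_acquire = datetime.strptime("2025-05-15 11:57:27.842", FMT)
acquired         = datetime.strptime("2025-05-15 11:57:27.985", FMT)
released         = datetime.strptime("2025-05-15 11:57:28.092", FMT)

print("waited for host-wide IPAM lock:", (acquired - about_to_acquire).total_seconds(), "s")  # ~0.143 s
print("held host-wide IPAM lock:      ", (released - acquired).total_seconds(), "s")          # ~0.107 s
```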
May 15 11:57:28.134773 containerd[1874]: 2025-05-15 11:57:28.092 [INFO][4808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.197/26] IPv6=[] ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" HandleID="k8s-pod-network.c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Workload="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" May 15 11:57:28.135859 containerd[1874]: 2025-05-15 11:57:28.098 [INFO][4773] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0", GenerateName:"calico-kube-controllers-96dfc677d-", Namespace:"calico-system", SelfLink:"", UID:"69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96dfc677d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"", Pod:"calico-kube-controllers-96dfc677d-wk2vl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9df99673fc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:28.135859 containerd[1874]: 2025-05-15 11:57:28.099 [INFO][4773] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.197/32] ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" May 15 11:57:28.135859 containerd[1874]: 2025-05-15 11:57:28.099 [INFO][4773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9df99673fc0 ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" May 15 11:57:28.135859 containerd[1874]: 2025-05-15 11:57:28.105 [INFO][4773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" May 15 11:57:28.135859 containerd[1874]: 
2025-05-15 11:57:28.106 [INFO][4773] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0", GenerateName:"calico-kube-controllers-96dfc677d-", Namespace:"calico-system", SelfLink:"", UID:"69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96dfc677d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82", Pod:"calico-kube-controllers-96dfc677d-wk2vl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9df99673fc0", MAC:"06:11:2f:7c:61:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:28.135859 containerd[1874]: 2025-05-15 11:57:28.124 [INFO][4773] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" Namespace="calico-system" Pod="calico-kube-controllers-96dfc677d-wk2vl" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-calico--kube--controllers--96dfc677d--wk2vl-eth0" May 15 11:57:28.168178 containerd[1874]: time="2025-05-15T11:57:28.168073699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddd4595cc-xq587,Uid:3a542d65-998c-4e93-9f30-3a517d0acb36,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6\"" May 15 11:57:28.169626 containerd[1874]: time="2025-05-15T11:57:28.169367298Z" level=info msg="StartContainer for \"de2c031ab07916748bb55a35b211dc9fe21591676f68f28fe0dc533c04c073de\" returns successfully" May 15 11:57:28.192267 containerd[1874]: time="2025-05-15T11:57:28.192234162Z" level=info msg="connecting to shim c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82" address="unix:///run/containerd/s/a1b0aec5ac547677c52dbe19ee175ec743ca16b036284122b4d440078320da69" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:28.212559 systemd[1]: Started cri-containerd-c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82.scope - libcontainer container c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82. 
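The kubelet's pod_startup_latency_tracker entry for coredns-668d6bf9bc-jc2wm earlier reported podStartSLOduration=32.742495582s with both pull timestamps at the zero time, which is simply observedRunningTime minus podCreationTimestamp when no image pull was needed (the coredns-7rpbh entry just below follows the same pattern). A short arithmetic check; Python's datetime keeps only microseconds, so the nanosecond tail of the log value is dropped:

```python
from datetime import datetime, timezone

created = datetime(2025, 5, 15, 11, 56, 54, tzinfo=timezone.utc)          # podCreationTimestamp
running = datetime(2025, 5, 15, 11, 57, 26, 742495, tzinfo=timezone.utc)  # observedRunningTime, truncated to µs

print((running - created).total_seconds())   # 32.742495, matching podStartSLOduration=32.742495582s
```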
May 15 11:57:28.242047 containerd[1874]: time="2025-05-15T11:57:28.241981164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dfc677d-wk2vl,Uid:69a43828-ae35-4eb0-a0e4-2f4a0dfd7a47,Namespace:calico-system,Attempt:0,} returns sandbox id \"c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82\"" May 15 11:57:28.748510 kubelet[3279]: I0515 11:57:28.748077 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7rpbh" podStartSLOduration=34.748061922 podStartE2EDuration="34.748061922s" podCreationTimestamp="2025-05-15 11:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 11:57:28.747547973 +0000 UTC m=+40.195378375" watchObservedRunningTime="2025-05-15 11:57:28.748061922 +0000 UTC m=+40.195892332" May 15 11:57:28.972595 systemd-networkd[1606]: cali61419e5d0d7: Gained IPv6LL May 15 11:57:29.484648 systemd-networkd[1606]: cali9df99673fc0: Gained IPv6LL May 15 11:57:29.548546 systemd-networkd[1606]: cali5ee1dc0a185: Gained IPv6LL May 15 11:57:29.676629 systemd-networkd[1606]: calic3a74b916b5: Gained IPv6LL May 15 11:57:30.624205 containerd[1874]: time="2025-05-15T11:57:30.623852619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvch9,Uid:db9a440f-784b-44cc-94cc-545a3468a789,Namespace:calico-system,Attempt:0,}" May 15 11:57:30.825311 containerd[1874]: time="2025-05-15T11:57:30.825219277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:30.827916 containerd[1874]: time="2025-05-15T11:57:30.827811461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 11:57:30.832068 containerd[1874]: time="2025-05-15T11:57:30.832041948Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:30.837797 containerd[1874]: time="2025-05-15T11:57:30.837538187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:30.838081 containerd[1874]: time="2025-05-15T11:57:30.838040831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.882073221s" May 15 11:57:30.838181 containerd[1874]: time="2025-05-15T11:57:30.838167586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 11:57:30.839902 containerd[1874]: time="2025-05-15T11:57:30.839852011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 11:57:30.841240 containerd[1874]: time="2025-05-15T11:57:30.841167116Z" level=info msg="CreateContainer within sandbox \"49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 
11:57:30.860596 systemd-networkd[1606]: calibd1570b17d9: Link UP May 15 11:57:30.861719 systemd-networkd[1606]: calibd1570b17d9: Gained carrier May 15 11:57:30.875372 containerd[1874]: time="2025-05-15T11:57:30.875246878Z" level=info msg="Container d6402df99ab227ec6e05f9b26e36cb4c654c7335b5066f6ce7e05816bd52a1ea: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.804 [INFO][5117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0 csi-node-driver- calico-system db9a440f-784b-44cc-94cc-545a3468a789 580 0 2025-05-15 11:57:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4334.0.0-a-59732b8df3 csi-node-driver-rvch9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd1570b17d9 [] []}} ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.804 [INFO][5117] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.822 [INFO][5134] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" HandleID="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Workload="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.829 [INFO][5134] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" HandleID="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Workload="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-59732b8df3", "pod":"csi-node-driver-rvch9", "timestamp":"2025-05-15 11:57:30.822318798 +0000 UTC"}, Hostname:"ci-4334.0.0-a-59732b8df3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.829 [INFO][5134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.829 [INFO][5134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.829 [INFO][5134] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-59732b8df3' May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.831 [INFO][5134] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.834 [INFO][5134] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.837 [INFO][5134] ipam/ipam.go 489: Trying affinity for 192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.839 [INFO][5134] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.841 [INFO][5134] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.841 [INFO][5134] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.843 [INFO][5134] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079 May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.847 [INFO][5134] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.856 [INFO][5134] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.198/26] block=192.168.62.192/26 handle="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.856 [INFO][5134] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.198/26] handle="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" host="ci-4334.0.0-a-59732b8df3" May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.856 [INFO][5134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 11:57:30.878370 containerd[1874]: 2025-05-15 11:57:30.856 [INFO][5134] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.198/26] IPv6=[] ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" HandleID="k8s-pod-network.95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Workload="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" May 15 11:57:30.879536 containerd[1874]: 2025-05-15 11:57:30.858 [INFO][5117] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"db9a440f-784b-44cc-94cc-545a3468a789", ResourceVersion:"580", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"", Pod:"csi-node-driver-rvch9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd1570b17d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:30.879536 containerd[1874]: 2025-05-15 11:57:30.858 [INFO][5117] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.198/32] ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" May 15 11:57:30.879536 containerd[1874]: 2025-05-15 11:57:30.858 [INFO][5117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd1570b17d9 ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" May 15 11:57:30.879536 containerd[1874]: 2025-05-15 11:57:30.861 [INFO][5117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" May 15 11:57:30.879536 containerd[1874]: 2025-05-15 11:57:30.861 [INFO][5117] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"db9a440f-784b-44cc-94cc-545a3468a789", ResourceVersion:"580", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 11, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-59732b8df3", ContainerID:"95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079", Pod:"csi-node-driver-rvch9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd1570b17d9", MAC:"62:6c:b1:e7:80:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 11:57:30.879536 containerd[1874]: 2025-05-15 11:57:30.877 [INFO][5117] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" Namespace="calico-system" Pod="csi-node-driver-rvch9" WorkloadEndpoint="ci--4334.0.0--a--59732b8df3-k8s-csi--node--driver--rvch9-eth0" May 15 11:57:30.896427 containerd[1874]: time="2025-05-15T11:57:30.896375619Z" level=info msg="CreateContainer within sandbox \"49705ab9aad5630b9ef1380f1163180ba703fc842475209ed75b0d46d2dd99eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d6402df99ab227ec6e05f9b26e36cb4c654c7335b5066f6ce7e05816bd52a1ea\"" May 15 11:57:30.896921 containerd[1874]: time="2025-05-15T11:57:30.896904816Z" level=info msg="StartContainer for \"d6402df99ab227ec6e05f9b26e36cb4c654c7335b5066f6ce7e05816bd52a1ea\"" May 15 11:57:30.898184 containerd[1874]: time="2025-05-15T11:57:30.898142342Z" level=info msg="connecting to shim d6402df99ab227ec6e05f9b26e36cb4c654c7335b5066f6ce7e05816bd52a1ea" address="unix:///run/containerd/s/b22aa5c21dfe4f5fd6b44f48edf067a064f0b7b75af5fbd4da6906e5ae8da46d" protocol=ttrpc version=3 May 15 11:57:30.916567 systemd[1]: Started cri-containerd-d6402df99ab227ec6e05f9b26e36cb4c654c7335b5066f6ce7e05816bd52a1ea.scope - libcontainer container d6402df99ab227ec6e05f9b26e36cb4c654c7335b5066f6ce7e05816bd52a1ea. 
May 15 11:57:30.932178 containerd[1874]: time="2025-05-15T11:57:30.931562416Z" level=info msg="connecting to shim 95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079" address="unix:///run/containerd/s/069aa59bea5158dfb7cf9ccde1ebc5caca681f8c1c515afaee29f09b7ef79e36" namespace=k8s.io protocol=ttrpc version=3 May 15 11:57:30.953591 systemd[1]: Started cri-containerd-95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079.scope - libcontainer container 95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079. May 15 11:57:30.961032 containerd[1874]: time="2025-05-15T11:57:30.960970224Z" level=info msg="StartContainer for \"d6402df99ab227ec6e05f9b26e36cb4c654c7335b5066f6ce7e05816bd52a1ea\" returns successfully" May 15 11:57:30.980155 containerd[1874]: time="2025-05-15T11:57:30.980127117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvch9,Uid:db9a440f-784b-44cc-94cc-545a3468a789,Namespace:calico-system,Attempt:0,} returns sandbox id \"95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079\"" May 15 11:57:31.174051 containerd[1874]: time="2025-05-15T11:57:31.173499283Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:31.176920 containerd[1874]: time="2025-05-15T11:57:31.176893718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 11:57:31.177666 containerd[1874]: time="2025-05-15T11:57:31.177642713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 336.908272ms" May 15 11:57:31.177708 containerd[1874]: time="2025-05-15T11:57:31.177670345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 11:57:31.179034 containerd[1874]: time="2025-05-15T11:57:31.179013194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 11:57:31.180675 containerd[1874]: time="2025-05-15T11:57:31.180651602Z" level=info msg="CreateContainer within sandbox \"f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 11:57:31.208899 containerd[1874]: time="2025-05-15T11:57:31.208864669Z" level=info msg="Container 5b78f155012814dbc295a407865e5e717f9fe735445e7c19405eb228f6a4c754: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:31.226788 containerd[1874]: time="2025-05-15T11:57:31.226758411Z" level=info msg="CreateContainer within sandbox \"f906bda2ecf8a4660b1549a04fe03d88f9a2671ac18a8809648fc6d876174ac6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b78f155012814dbc295a407865e5e717f9fe735445e7c19405eb228f6a4c754\"" May 15 11:57:31.227161 containerd[1874]: time="2025-05-15T11:57:31.227096563Z" level=info msg="StartContainer for \"5b78f155012814dbc295a407865e5e717f9fe735445e7c19405eb228f6a4c754\"" May 15 11:57:31.228751 containerd[1874]: time="2025-05-15T11:57:31.228714387Z" level=info msg="connecting to shim 5b78f155012814dbc295a407865e5e717f9fe735445e7c19405eb228f6a4c754" 
address="unix:///run/containerd/s/54ec94b83ed71b1e53eee876a87c676e5acdd193f1825d544c4c97019834d7b4" protocol=ttrpc version=3 May 15 11:57:31.243546 systemd[1]: Started cri-containerd-5b78f155012814dbc295a407865e5e717f9fe735445e7c19405eb228f6a4c754.scope - libcontainer container 5b78f155012814dbc295a407865e5e717f9fe735445e7c19405eb228f6a4c754. May 15 11:57:31.280842 containerd[1874]: time="2025-05-15T11:57:31.280818111Z" level=info msg="StartContainer for \"5b78f155012814dbc295a407865e5e717f9fe735445e7c19405eb228f6a4c754\" returns successfully" May 15 11:57:31.758197 kubelet[3279]: I0515 11:57:31.758136 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xmn59" podStartSLOduration=27.872846161 podStartE2EDuration="30.758122092s" podCreationTimestamp="2025-05-15 11:57:01 +0000 UTC" firstStartedPulling="2025-05-15 11:57:27.954079628 +0000 UTC m=+39.401910030" lastFinishedPulling="2025-05-15 11:57:30.839355551 +0000 UTC m=+42.287185961" observedRunningTime="2025-05-15 11:57:31.756840653 +0000 UTC m=+43.204671055" watchObservedRunningTime="2025-05-15 11:57:31.758122092 +0000 UTC m=+43.205952494" May 15 11:57:31.922176 kubelet[3279]: I0515 11:57:31.922114 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddd4595cc-xq587" podStartSLOduration=27.914771075 podStartE2EDuration="30.922096882s" podCreationTimestamp="2025-05-15 11:57:01 +0000 UTC" firstStartedPulling="2025-05-15 11:57:28.171124166 +0000 UTC m=+39.618954568" lastFinishedPulling="2025-05-15 11:57:31.178449949 +0000 UTC m=+42.626280375" observedRunningTime="2025-05-15 11:57:31.770234277 +0000 UTC m=+43.218064687" watchObservedRunningTime="2025-05-15 11:57:31.922096882 +0000 UTC m=+43.369927284" May 15 11:57:32.045566 systemd-networkd[1606]: calibd1570b17d9: Gained IPv6LL May 15 11:57:32.754228 kubelet[3279]: I0515 11:57:32.754095 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 11:57:33.014955 containerd[1874]: time="2025-05-15T11:57:33.014800554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:33.016697 containerd[1874]: time="2025-05-15T11:57:33.016673471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 11:57:33.019877 containerd[1874]: time="2025-05-15T11:57:33.019842253Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:33.024506 containerd[1874]: time="2025-05-15T11:57:33.024467214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:33.024922 containerd[1874]: time="2025-05-15T11:57:33.024826439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.845786364s" May 15 11:57:33.024922 containerd[1874]: 
time="2025-05-15T11:57:33.024850544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 11:57:33.026088 containerd[1874]: time="2025-05-15T11:57:33.026041773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 11:57:33.036754 containerd[1874]: time="2025-05-15T11:57:33.036728746Z" level=info msg="CreateContainer within sandbox \"c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 11:57:33.076670 containerd[1874]: time="2025-05-15T11:57:33.076633867Z" level=info msg="Container bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:33.109136 containerd[1874]: time="2025-05-15T11:57:33.109075854Z" level=info msg="CreateContainer within sandbox \"c417b53784abd1b5d6934afeecdd51c739f05d7e844470682fc396fc3bb93e82\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\"" May 15 11:57:33.109682 containerd[1874]: time="2025-05-15T11:57:33.109653140Z" level=info msg="StartContainer for \"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\"" May 15 11:57:33.110389 containerd[1874]: time="2025-05-15T11:57:33.110360573Z" level=info msg="connecting to shim bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23" address="unix:///run/containerd/s/a1b0aec5ac547677c52dbe19ee175ec743ca16b036284122b4d440078320da69" protocol=ttrpc version=3 May 15 11:57:33.130569 systemd[1]: Started cri-containerd-bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23.scope - libcontainer container bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23. 
May 15 11:57:33.170749 containerd[1874]: time="2025-05-15T11:57:33.170725515Z" level=info msg="StartContainer for \"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" returns successfully" May 15 11:57:33.773196 kubelet[3279]: I0515 11:57:33.773140 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-96dfc677d-wk2vl" podStartSLOduration=27.99041144 podStartE2EDuration="32.773125831s" podCreationTimestamp="2025-05-15 11:57:01 +0000 UTC" firstStartedPulling="2025-05-15 11:57:28.242949292 +0000 UTC m=+39.690779694" lastFinishedPulling="2025-05-15 11:57:33.025663683 +0000 UTC m=+44.473494085" observedRunningTime="2025-05-15 11:57:33.772906145 +0000 UTC m=+45.220736547" watchObservedRunningTime="2025-05-15 11:57:33.773125831 +0000 UTC m=+45.220956233" May 15 11:57:33.784691 containerd[1874]: time="2025-05-15T11:57:33.784550742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" id:\"5d3a6f876e2aef0b98df49f543fd80bfdd4015fe5bac195a831462c161df69f4\" pid:5322 exited_at:{seconds:1747310253 nanos:784297096}" May 15 11:57:34.334090 containerd[1874]: time="2025-05-15T11:57:34.334044587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:34.336037 containerd[1874]: time="2025-05-15T11:57:34.336002539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 15 11:57:34.338991 containerd[1874]: time="2025-05-15T11:57:34.338950891Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:34.342601 containerd[1874]: time="2025-05-15T11:57:34.342553483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:34.343116 containerd[1874]: time="2025-05-15T11:57:34.342853970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.316791485s" May 15 11:57:34.343116 containerd[1874]: time="2025-05-15T11:57:34.342882019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 11:57:34.345239 containerd[1874]: time="2025-05-15T11:57:34.345211836Z" level=info msg="CreateContainer within sandbox \"95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 11:57:34.384157 containerd[1874]: time="2025-05-15T11:57:34.384035235Z" level=info msg="Container b131e94a9651c59ade19cdcbf912974e788650ed0125537a4d68686f1aea0903: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:34.409996 containerd[1874]: time="2025-05-15T11:57:34.409967334Z" level=info msg="CreateContainer within sandbox \"95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"b131e94a9651c59ade19cdcbf912974e788650ed0125537a4d68686f1aea0903\"" May 15 11:57:34.411142 containerd[1874]: time="2025-05-15T11:57:34.411040240Z" level=info msg="StartContainer for \"b131e94a9651c59ade19cdcbf912974e788650ed0125537a4d68686f1aea0903\"" May 15 11:57:34.412337 containerd[1874]: time="2025-05-15T11:57:34.412317863Z" level=info msg="connecting to shim b131e94a9651c59ade19cdcbf912974e788650ed0125537a4d68686f1aea0903" address="unix:///run/containerd/s/069aa59bea5158dfb7cf9ccde1ebc5caca681f8c1c515afaee29f09b7ef79e36" protocol=ttrpc version=3 May 15 11:57:34.430549 systemd[1]: Started cri-containerd-b131e94a9651c59ade19cdcbf912974e788650ed0125537a4d68686f1aea0903.scope - libcontainer container b131e94a9651c59ade19cdcbf912974e788650ed0125537a4d68686f1aea0903. May 15 11:57:34.460448 containerd[1874]: time="2025-05-15T11:57:34.460395584Z" level=info msg="StartContainer for \"b131e94a9651c59ade19cdcbf912974e788650ed0125537a4d68686f1aea0903\" returns successfully" May 15 11:57:34.461782 containerd[1874]: time="2025-05-15T11:57:34.461762306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 11:57:35.743628 containerd[1874]: time="2025-05-15T11:57:35.743582763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:35.746151 containerd[1874]: time="2025-05-15T11:57:35.746127785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 11:57:35.749669 containerd[1874]: time="2025-05-15T11:57:35.749647448Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:35.753516 containerd[1874]: time="2025-05-15T11:57:35.753485958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 11:57:35.755070 containerd[1874]: time="2025-05-15T11:57:35.754947602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.293109103s" May 15 11:57:35.755070 containerd[1874]: time="2025-05-15T11:57:35.754979763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 11:57:35.757962 containerd[1874]: time="2025-05-15T11:57:35.757004813Z" level=info msg="CreateContainer within sandbox \"95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 11:57:35.783478 containerd[1874]: time="2025-05-15T11:57:35.781135622Z" level=info msg="Container dc2f2fc036fbbdf456e9db64fdc6eaf17b139caa8db21f212dbf1ca5e0a0aacb: CDI devices from CRI Config.CDIDevices: []" May 15 11:57:35.798334 containerd[1874]: time="2025-05-15T11:57:35.798293148Z" level=info msg="CreateContainer within sandbox 
\"95b799fe3b88c3a3c3379f597e308cfe2872d6ce8193a0e08163fbcb73285079\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dc2f2fc036fbbdf456e9db64fdc6eaf17b139caa8db21f212dbf1ca5e0a0aacb\"" May 15 11:57:35.798855 containerd[1874]: time="2025-05-15T11:57:35.798823098Z" level=info msg="StartContainer for \"dc2f2fc036fbbdf456e9db64fdc6eaf17b139caa8db21f212dbf1ca5e0a0aacb\"" May 15 11:57:35.800198 containerd[1874]: time="2025-05-15T11:57:35.800128746Z" level=info msg="connecting to shim dc2f2fc036fbbdf456e9db64fdc6eaf17b139caa8db21f212dbf1ca5e0a0aacb" address="unix:///run/containerd/s/069aa59bea5158dfb7cf9ccde1ebc5caca681f8c1c515afaee29f09b7ef79e36" protocol=ttrpc version=3 May 15 11:57:35.820550 systemd[1]: Started cri-containerd-dc2f2fc036fbbdf456e9db64fdc6eaf17b139caa8db21f212dbf1ca5e0a0aacb.scope - libcontainer container dc2f2fc036fbbdf456e9db64fdc6eaf17b139caa8db21f212dbf1ca5e0a0aacb. May 15 11:57:35.850196 containerd[1874]: time="2025-05-15T11:57:35.850121656Z" level=info msg="StartContainer for \"dc2f2fc036fbbdf456e9db64fdc6eaf17b139caa8db21f212dbf1ca5e0a0aacb\" returns successfully" May 15 11:57:36.699142 kubelet[3279]: I0515 11:57:36.699108 3279 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 11:57:36.699142 kubelet[3279]: I0515 11:57:36.699143 3279 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 11:57:36.788853 kubelet[3279]: I0515 11:57:36.788677 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rvch9" podStartSLOduration=31.01415842 podStartE2EDuration="35.788662873s" podCreationTimestamp="2025-05-15 11:57:01 +0000 UTC" firstStartedPulling="2025-05-15 11:57:30.981043532 +0000 UTC m=+42.428873942" lastFinishedPulling="2025-05-15 11:57:35.755547985 +0000 UTC m=+47.203378395" observedRunningTime="2025-05-15 11:57:36.787823676 +0000 UTC m=+48.235654102" watchObservedRunningTime="2025-05-15 11:57:36.788662873 +0000 UTC m=+48.236493275" May 15 11:57:46.143010 kubelet[3279]: I0515 11:57:46.142951 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 11:57:54.830294 containerd[1874]: time="2025-05-15T11:57:54.830257044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\" id:\"eb794de2fa286348aec5e23c58bdbd3e922c6ef09ebf2261f5fbd591f69d5c1a\" pid:5439 exited_at:{seconds:1747310274 nanos:829773840}" May 15 11:58:03.782780 containerd[1874]: time="2025-05-15T11:58:03.782739511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" id:\"beab2e732b4cd244419a681ce520b8582ae3d659da86701223d901acf6ae4141\" pid:5469 exited_at:{seconds:1747310283 nanos:782536026}" May 15 11:58:24.830275 containerd[1874]: time="2025-05-15T11:58:24.830218338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\" id:\"aeb4095e381c76a41722640fdd1634edf8afec780776ac6b486e388924b162a7\" pid:5502 exited_at:{seconds:1747310304 nanos:829897882}" May 15 11:58:25.031814 systemd[1]: Started sshd@7-10.200.20.23:22-10.200.16.10:43948.service - OpenSSH per-connection server daemon (10.200.16.10:43948). 
May 15 11:58:25.486893 sshd[5515]: Accepted publickey for core from 10.200.16.10 port 43948 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:25.488602 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:25.495864 systemd-logind[1854]: New session 10 of user core. May 15 11:58:25.499725 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 11:58:25.899156 sshd[5519]: Connection closed by 10.200.16.10 port 43948 May 15 11:58:25.899944 sshd-session[5515]: pam_unix(sshd:session): session closed for user core May 15 11:58:25.903256 systemd-logind[1854]: Session 10 logged out. Waiting for processes to exit. May 15 11:58:25.903360 systemd[1]: sshd@7-10.200.20.23:22-10.200.16.10:43948.service: Deactivated successfully. May 15 11:58:25.905625 systemd[1]: session-10.scope: Deactivated successfully. May 15 11:58:25.907050 systemd-logind[1854]: Removed session 10. May 15 11:58:30.986160 systemd[1]: Started sshd@8-10.200.20.23:22-10.200.16.10:35582.service - OpenSSH per-connection server daemon (10.200.16.10:35582). May 15 11:58:31.440588 sshd[5532]: Accepted publickey for core from 10.200.16.10 port 35582 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:31.441474 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:31.448187 systemd-logind[1854]: New session 11 of user core. May 15 11:58:31.452577 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 11:58:31.805071 sshd[5534]: Connection closed by 10.200.16.10 port 35582 May 15 11:58:31.805863 sshd-session[5532]: pam_unix(sshd:session): session closed for user core May 15 11:58:31.808703 systemd[1]: sshd@8-10.200.20.23:22-10.200.16.10:35582.service: Deactivated successfully. May 15 11:58:31.810636 systemd[1]: session-11.scope: Deactivated successfully. May 15 11:58:31.813051 systemd-logind[1854]: Session 11 logged out. Waiting for processes to exit. May 15 11:58:31.814500 systemd-logind[1854]: Removed session 11. May 15 11:58:32.423826 containerd[1874]: time="2025-05-15T11:58:32.423786627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" id:\"1fbdd303e76b67a3f7ce0a2e3de4777d4c0bf2e783f4744ba6880d4b25ea857c\" pid:5558 exited_at:{seconds:1747310312 nanos:423516980}" May 15 11:58:33.782013 containerd[1874]: time="2025-05-15T11:58:33.781850398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" id:\"00a385551880cfa648d9857bce0c2256cdfae750dd1eb037d3d902cb0d0c3eba\" pid:5579 exited_at:{seconds:1747310313 nanos:781688962}" May 15 11:58:36.879630 systemd[1]: Started sshd@9-10.200.20.23:22-10.200.16.10:35592.service - OpenSSH per-connection server daemon (10.200.16.10:35592). May 15 11:58:37.292983 sshd[5590]: Accepted publickey for core from 10.200.16.10 port 35592 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:37.294072 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:37.297662 systemd-logind[1854]: New session 12 of user core. May 15 11:58:37.306786 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 15 11:58:37.645541 sshd[5592]: Connection closed by 10.200.16.10 port 35592 May 15 11:58:37.646171 sshd-session[5590]: pam_unix(sshd:session): session closed for user core May 15 11:58:37.649079 systemd[1]: sshd@9-10.200.20.23:22-10.200.16.10:35592.service: Deactivated successfully. May 15 11:58:37.650556 systemd[1]: session-12.scope: Deactivated successfully. May 15 11:58:37.651207 systemd-logind[1854]: Session 12 logged out. Waiting for processes to exit. May 15 11:58:37.652362 systemd-logind[1854]: Removed session 12. May 15 11:58:37.725363 systemd[1]: Started sshd@10-10.200.20.23:22-10.200.16.10:35602.service - OpenSSH per-connection server daemon (10.200.16.10:35602). May 15 11:58:38.146065 sshd[5605]: Accepted publickey for core from 10.200.16.10 port 35602 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:38.147054 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:38.150575 systemd-logind[1854]: New session 13 of user core. May 15 11:58:38.155560 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 11:58:38.529644 sshd[5607]: Connection closed by 10.200.16.10 port 35602 May 15 11:58:38.529484 sshd-session[5605]: pam_unix(sshd:session): session closed for user core May 15 11:58:38.532290 systemd-logind[1854]: Session 13 logged out. Waiting for processes to exit. May 15 11:58:38.532830 systemd[1]: sshd@10-10.200.20.23:22-10.200.16.10:35602.service: Deactivated successfully. May 15 11:58:38.536126 systemd[1]: session-13.scope: Deactivated successfully. May 15 11:58:38.541123 systemd-logind[1854]: Removed session 13. May 15 11:58:38.608177 systemd[1]: Started sshd@11-10.200.20.23:22-10.200.16.10:53676.service - OpenSSH per-connection server daemon (10.200.16.10:53676). May 15 11:58:39.049474 sshd[5617]: Accepted publickey for core from 10.200.16.10 port 53676 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:39.050501 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:39.054467 systemd-logind[1854]: New session 14 of user core. May 15 11:58:39.057552 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 11:58:39.427553 sshd[5619]: Connection closed by 10.200.16.10 port 53676 May 15 11:58:39.428013 sshd-session[5617]: pam_unix(sshd:session): session closed for user core May 15 11:58:39.430163 systemd[1]: sshd@11-10.200.20.23:22-10.200.16.10:53676.service: Deactivated successfully. May 15 11:58:39.432861 systemd[1]: session-14.scope: Deactivated successfully. May 15 11:58:39.434874 systemd-logind[1854]: Session 14 logged out. Waiting for processes to exit. May 15 11:58:39.435681 systemd-logind[1854]: Removed session 14. May 15 11:58:44.507093 systemd[1]: Started sshd@12-10.200.20.23:22-10.200.16.10:53688.service - OpenSSH per-connection server daemon (10.200.16.10:53688). May 15 11:58:44.921521 sshd[5640]: Accepted publickey for core from 10.200.16.10 port 53688 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:44.922601 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:44.926302 systemd-logind[1854]: New session 15 of user core. May 15 11:58:44.932555 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 15 11:58:45.276149 sshd[5644]: Connection closed by 10.200.16.10 port 53688 May 15 11:58:45.276654 sshd-session[5640]: pam_unix(sshd:session): session closed for user core May 15 11:58:45.279519 systemd[1]: sshd@12-10.200.20.23:22-10.200.16.10:53688.service: Deactivated successfully. May 15 11:58:45.281270 systemd[1]: session-15.scope: Deactivated successfully. May 15 11:58:45.282418 systemd-logind[1854]: Session 15 logged out. Waiting for processes to exit. May 15 11:58:45.284057 systemd-logind[1854]: Removed session 15. May 15 11:58:48.688594 update_engine[1858]: I20250515 11:58:48.688513 1858 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 11:58:48.688594 update_engine[1858]: I20250515 11:58:48.688552 1858 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 11:58:48.688918 update_engine[1858]: I20250515 11:58:48.688705 1858 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 11:58:48.689573 update_engine[1858]: I20250515 11:58:48.688967 1858 omaha_request_params.cc:62] Current group set to developer May 15 11:58:48.691105 update_engine[1858]: I20250515 11:58:48.690881 1858 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 11:58:48.691105 update_engine[1858]: I20250515 11:58:48.690902 1858 update_attempter.cc:643] Scheduling an action processor start. May 15 11:58:48.691105 update_engine[1858]: I20250515 11:58:48.690920 1858 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 11:58:48.691105 update_engine[1858]: I20250515 11:58:48.690953 1858 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 11:58:48.691105 update_engine[1858]: I20250515 11:58:48.691001 1858 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 11:58:48.691105 update_engine[1858]: I20250515 11:58:48.691006 1858 omaha_request_action.cc:272] Request: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: May 15 11:58:48.691105 update_engine[1858]: I20250515 11:58:48.691010 1858 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 11:58:48.693470 update_engine[1858]: I20250515 11:58:48.693401 1858 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 11:58:48.693701 update_engine[1858]: I20250515 11:58:48.693670 1858 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 11:58:48.697220 locksmithd[1973]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 11:58:48.732214 update_engine[1858]: E20250515 11:58:48.732179 1858 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 11:58:48.732286 update_engine[1858]: I20250515 11:58:48.732241 1858 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 11:58:50.352820 systemd[1]: Started sshd@13-10.200.20.23:22-10.200.16.10:37232.service - OpenSSH per-connection server daemon (10.200.16.10:37232). 
May 15 11:58:50.771699 sshd[5657]: Accepted publickey for core from 10.200.16.10 port 37232 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:50.772725 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:50.776095 systemd-logind[1854]: New session 16 of user core. May 15 11:58:50.782543 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 11:58:51.142043 sshd[5659]: Connection closed by 10.200.16.10 port 37232 May 15 11:58:51.142649 sshd-session[5657]: pam_unix(sshd:session): session closed for user core May 15 11:58:51.145505 systemd-logind[1854]: Session 16 logged out. Waiting for processes to exit. May 15 11:58:51.145882 systemd[1]: sshd@13-10.200.20.23:22-10.200.16.10:37232.service: Deactivated successfully. May 15 11:58:51.148603 systemd[1]: session-16.scope: Deactivated successfully. May 15 11:58:51.150340 systemd-logind[1854]: Removed session 16. May 15 11:58:54.829192 containerd[1874]: time="2025-05-15T11:58:54.829148012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\" id:\"d20734d8ac076a14a5d0fe6a754502ee5f49f2b0fb13f31597b35836c41bc58d\" pid:5684 exited_at:{seconds:1747310334 nanos:828809380}" May 15 11:58:56.224132 systemd[1]: Started sshd@14-10.200.20.23:22-10.200.16.10:37240.service - OpenSSH per-connection server daemon (10.200.16.10:37240). May 15 11:58:56.637059 sshd[5699]: Accepted publickey for core from 10.200.16.10 port 37240 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:56.638095 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:56.641661 systemd-logind[1854]: New session 17 of user core. May 15 11:58:56.652714 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 11:58:56.994404 sshd[5701]: Connection closed by 10.200.16.10 port 37240 May 15 11:58:56.995097 sshd-session[5699]: pam_unix(sshd:session): session closed for user core May 15 11:58:56.998451 systemd[1]: sshd@14-10.200.20.23:22-10.200.16.10:37240.service: Deactivated successfully. May 15 11:58:56.999978 systemd[1]: session-17.scope: Deactivated successfully. May 15 11:58:57.000620 systemd-logind[1854]: Session 17 logged out. Waiting for processes to exit. May 15 11:58:57.001818 systemd-logind[1854]: Removed session 17. May 15 11:58:57.073046 systemd[1]: Started sshd@15-10.200.20.23:22-10.200.16.10:37248.service - OpenSSH per-connection server daemon (10.200.16.10:37248). May 15 11:58:57.490600 sshd[5713]: Accepted publickey for core from 10.200.16.10 port 37248 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:57.491703 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:57.495283 systemd-logind[1854]: New session 18 of user core. May 15 11:58:57.503542 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 11:58:57.903065 sshd[5715]: Connection closed by 10.200.16.10 port 37248 May 15 11:58:57.903553 sshd-session[5713]: pam_unix(sshd:session): session closed for user core May 15 11:58:57.907747 systemd[1]: sshd@15-10.200.20.23:22-10.200.16.10:37248.service: Deactivated successfully. May 15 11:58:57.910590 systemd[1]: session-18.scope: Deactivated successfully. May 15 11:58:57.912795 systemd-logind[1854]: Session 18 logged out. Waiting for processes to exit. May 15 11:58:57.914669 systemd-logind[1854]: Removed session 18. 
May 15 11:58:57.983792 systemd[1]: Started sshd@16-10.200.20.23:22-10.200.16.10:37254.service - OpenSSH per-connection server daemon (10.200.16.10:37254). May 15 11:58:58.438100 sshd[5725]: Accepted publickey for core from 10.200.16.10 port 37254 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:58.439520 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:58.442979 systemd-logind[1854]: New session 19 of user core. May 15 11:58:58.450543 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 11:58:58.685564 update_engine[1858]: I20250515 11:58:58.685477 1858 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 11:58:58.685850 update_engine[1858]: I20250515 11:58:58.685672 1858 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 11:58:58.685977 update_engine[1858]: I20250515 11:58:58.685885 1858 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 11:58:58.730827 update_engine[1858]: E20250515 11:58:58.730607 1858 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 11:58:58.730827 update_engine[1858]: I20250515 11:58:58.730661 1858 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 11:58:59.467583 sshd[5739]: Connection closed by 10.200.16.10 port 37254 May 15 11:58:59.468109 sshd-session[5725]: pam_unix(sshd:session): session closed for user core May 15 11:58:59.470880 systemd[1]: sshd@16-10.200.20.23:22-10.200.16.10:37254.service: Deactivated successfully. May 15 11:58:59.472415 systemd[1]: session-19.scope: Deactivated successfully. May 15 11:58:59.473022 systemd-logind[1854]: Session 19 logged out. Waiting for processes to exit. May 15 11:58:59.474195 systemd-logind[1854]: Removed session 19. May 15 11:58:59.545737 systemd[1]: Started sshd@17-10.200.20.23:22-10.200.16.10:43610.service - OpenSSH per-connection server daemon (10.200.16.10:43610). May 15 11:58:59.959500 sshd[5757]: Accepted publickey for core from 10.200.16.10 port 43610 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:58:59.960487 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:58:59.965653 systemd-logind[1854]: New session 20 of user core. May 15 11:58:59.969568 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 11:59:00.420526 sshd[5759]: Connection closed by 10.200.16.10 port 43610 May 15 11:59:00.423494 sshd-session[5757]: pam_unix(sshd:session): session closed for user core May 15 11:59:00.428695 systemd-logind[1854]: Session 20 logged out. Waiting for processes to exit. May 15 11:59:00.428833 systemd[1]: sshd@17-10.200.20.23:22-10.200.16.10:43610.service: Deactivated successfully. May 15 11:59:00.431787 systemd[1]: session-20.scope: Deactivated successfully. May 15 11:59:00.433896 systemd-logind[1854]: Removed session 20. May 15 11:59:00.495142 systemd[1]: Started sshd@18-10.200.20.23:22-10.200.16.10:43616.service - OpenSSH per-connection server daemon (10.200.16.10:43616). May 15 11:59:00.904601 sshd[5769]: Accepted publickey for core from 10.200.16.10 port 43616 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:59:00.905662 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:59:00.909248 systemd-logind[1854]: New session 21 of user core. May 15 11:59:00.913551 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 15 11:59:01.264769 sshd[5771]: Connection closed by 10.200.16.10 port 43616 May 15 11:59:01.265420 sshd-session[5769]: pam_unix(sshd:session): session closed for user core May 15 11:59:01.268162 systemd[1]: sshd@18-10.200.20.23:22-10.200.16.10:43616.service: Deactivated successfully. May 15 11:59:01.269733 systemd[1]: session-21.scope: Deactivated successfully. May 15 11:59:01.270968 systemd-logind[1854]: Session 21 logged out. Waiting for processes to exit. May 15 11:59:01.272893 systemd-logind[1854]: Removed session 21. May 15 11:59:03.780582 containerd[1874]: time="2025-05-15T11:59:03.780530931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" id:\"9c6b353f0805a63c7d705e8f8169c6a303d592ae4888bf0f2f9bb0c963bb0988\" pid:5798 exited_at:{seconds:1747310343 nanos:780236443}" May 15 11:59:06.347421 systemd[1]: Started sshd@19-10.200.20.23:22-10.200.16.10:43630.service - OpenSSH per-connection server daemon (10.200.16.10:43630). May 15 11:59:06.793231 sshd[5811]: Accepted publickey for core from 10.200.16.10 port 43630 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:59:06.794310 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:59:06.797937 systemd-logind[1854]: New session 22 of user core. May 15 11:59:06.799577 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 11:59:07.152767 sshd[5813]: Connection closed by 10.200.16.10 port 43630 May 15 11:59:07.153534 sshd-session[5811]: pam_unix(sshd:session): session closed for user core May 15 11:59:07.156404 systemd-logind[1854]: Session 22 logged out. Waiting for processes to exit. May 15 11:59:07.156939 systemd[1]: sshd@19-10.200.20.23:22-10.200.16.10:43630.service: Deactivated successfully. May 15 11:59:07.158695 systemd[1]: session-22.scope: Deactivated successfully. May 15 11:59:07.160433 systemd-logind[1854]: Removed session 22. May 15 11:59:08.685562 update_engine[1858]: I20250515 11:59:08.685509 1858 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 11:59:08.686353 update_engine[1858]: I20250515 11:59:08.686069 1858 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 11:59:08.686353 update_engine[1858]: I20250515 11:59:08.686307 1858 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 11:59:08.725560 update_engine[1858]: E20250515 11:59:08.725486 1858 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 11:59:08.725560 update_engine[1858]: I20250515 11:59:08.725538 1858 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 11:59:12.230453 systemd[1]: Started sshd@20-10.200.20.23:22-10.200.16.10:42014.service - OpenSSH per-connection server daemon (10.200.16.10:42014). May 15 11:59:12.647079 sshd[5825]: Accepted publickey for core from 10.200.16.10 port 42014 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:59:12.648199 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:59:12.651476 systemd-logind[1854]: New session 23 of user core. May 15 11:59:12.660542 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 11:59:13.001055 sshd[5827]: Connection closed by 10.200.16.10 port 42014 May 15 11:59:13.001606 sshd-session[5825]: pam_unix(sshd:session): session closed for user core May 15 11:59:13.004676 systemd[1]: sshd@20-10.200.20.23:22-10.200.16.10:42014.service: Deactivated successfully. May 15 11:59:13.006180 systemd[1]: session-23.scope: Deactivated successfully. May 15 11:59:13.007246 systemd-logind[1854]: Session 23 logged out. Waiting for processes to exit. May 15 11:59:13.008873 systemd-logind[1854]: Removed session 23. May 15 11:59:18.079881 systemd[1]: Started sshd@21-10.200.20.23:22-10.200.16.10:42022.service - OpenSSH per-connection server daemon (10.200.16.10:42022). May 15 11:59:18.492909 sshd[5839]: Accepted publickey for core from 10.200.16.10 port 42022 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:59:18.493845 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:59:18.497491 systemd-logind[1854]: New session 24 of user core. May 15 11:59:18.501541 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 11:59:18.685602 update_engine[1858]: I20250515 11:59:18.685460 1858 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 11:59:18.685803 update_engine[1858]: I20250515 11:59:18.685735 1858 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 11:59:18.685963 update_engine[1858]: I20250515 11:59:18.685935 1858 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 11:59:18.701112 update_engine[1858]: E20250515 11:59:18.701079 1858 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 11:59:18.701185 update_engine[1858]: I20250515 11:59:18.701130 1858 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 11:59:18.701185 update_engine[1858]: I20250515 11:59:18.701138 1858 omaha_request_action.cc:617] Omaha request response: May 15 11:59:18.701221 update_engine[1858]: E20250515 11:59:18.701205 1858 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 11:59:18.701237 update_engine[1858]: I20250515 11:59:18.701220 1858 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 11:59:18.701237 update_engine[1858]: I20250515 11:59:18.701223 1858 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 11:59:18.701237 update_engine[1858]: I20250515 11:59:18.701227 1858 update_attempter.cc:306] Processing Done. May 15 11:59:18.701275 update_engine[1858]: E20250515 11:59:18.701238 1858 update_attempter.cc:619] Update failed. May 15 11:59:18.701275 update_engine[1858]: I20250515 11:59:18.701242 1858 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 11:59:18.701275 update_engine[1858]: I20250515 11:59:18.701246 1858 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 11:59:18.701275 update_engine[1858]: I20250515 11:59:18.701249 1858 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 15 11:59:18.701328 update_engine[1858]: I20250515 11:59:18.701299 1858 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 11:59:18.701328 update_engine[1858]: I20250515 11:59:18.701316 1858 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 11:59:18.701328 update_engine[1858]: I20250515 11:59:18.701323 1858 omaha_request_action.cc:272] Request: May 15 11:59:18.701328 update_engine[1858]: May 15 11:59:18.701328 update_engine[1858]: May 15 11:59:18.701328 update_engine[1858]: May 15 11:59:18.701328 update_engine[1858]: May 15 11:59:18.701328 update_engine[1858]: May 15 11:59:18.701328 update_engine[1858]: May 15 11:59:18.701328 update_engine[1858]: I20250515 11:59:18.701326 1858 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 11:59:18.701462 update_engine[1858]: I20250515 11:59:18.701427 1858 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 11:59:18.701725 update_engine[1858]: I20250515 11:59:18.701614 1858 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 11:59:18.701775 locksmithd[1973]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 11:59:18.843025 sshd[5841]: Connection closed by 10.200.16.10 port 42022 May 15 11:59:18.843413 sshd-session[5839]: pam_unix(sshd:session): session closed for user core May 15 11:59:18.846074 systemd[1]: sshd@21-10.200.20.23:22-10.200.16.10:42022.service: Deactivated successfully. May 15 11:59:18.848696 systemd[1]: session-24.scope: Deactivated successfully. May 15 11:59:18.850019 systemd-logind[1854]: Session 24 logged out. Waiting for processes to exit. May 15 11:59:18.851297 systemd-logind[1854]: Removed session 24. May 15 11:59:18.986463 update_engine[1858]: E20250515 11:59:18.986389 1858 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 11:59:18.986463 update_engine[1858]: I20250515 11:59:18.986464 1858 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 11:59:18.986557 update_engine[1858]: I20250515 11:59:18.986471 1858 omaha_request_action.cc:617] Omaha request response: May 15 11:59:18.986557 update_engine[1858]: I20250515 11:59:18.986477 1858 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 11:59:18.986557 update_engine[1858]: I20250515 11:59:18.986480 1858 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 11:59:18.986557 update_engine[1858]: I20250515 11:59:18.986483 1858 update_attempter.cc:306] Processing Done. May 15 11:59:18.986557 update_engine[1858]: I20250515 11:59:18.986487 1858 update_attempter.cc:310] Error event sent. May 15 11:59:18.986557 update_engine[1858]: I20250515 11:59:18.986495 1858 update_check_scheduler.cc:74] Next update check in 49m3s May 15 11:59:18.986792 locksmithd[1973]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 11:59:23.940196 systemd[1]: Started sshd@22-10.200.20.23:22-10.200.16.10:42984.service - OpenSSH per-connection server daemon (10.200.16.10:42984). 
May 15 11:59:24.394042 sshd[5853]: Accepted publickey for core from 10.200.16.10 port 42984 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:59:24.395168 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:59:24.399238 systemd-logind[1854]: New session 25 of user core. May 15 11:59:24.407555 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 11:59:24.760031 sshd[5855]: Connection closed by 10.200.16.10 port 42984 May 15 11:59:24.760694 sshd-session[5853]: pam_unix(sshd:session): session closed for user core May 15 11:59:24.763421 systemd[1]: sshd@22-10.200.20.23:22-10.200.16.10:42984.service: Deactivated successfully. May 15 11:59:24.765109 systemd[1]: session-25.scope: Deactivated successfully. May 15 11:59:24.765874 systemd-logind[1854]: Session 25 logged out. Waiting for processes to exit. May 15 11:59:24.766972 systemd-logind[1854]: Removed session 25. May 15 11:59:24.828462 containerd[1874]: time="2025-05-15T11:59:24.828348589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f0c1d9843808ef1e9a8f1b975da6aa4cd7d8909131c0b9a0dafbcf65dadcc71\" id:\"c1f490264ae6643df75b30cf9956c29179ad4ca42ffd91d9b70b730bcbb73f2e\" pid:5878 exited_at:{seconds:1747310364 nanos:827507520}" May 15 11:59:29.836914 systemd[1]: Started sshd@23-10.200.20.23:22-10.200.16.10:41984.service - OpenSSH per-connection server daemon (10.200.16.10:41984). May 15 11:59:30.257536 sshd[5895]: Accepted publickey for core from 10.200.16.10 port 41984 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:59:30.258588 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:59:30.262105 systemd-logind[1854]: New session 26 of user core. May 15 11:59:30.272555 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 11:59:30.609927 sshd[5897]: Connection closed by 10.200.16.10 port 41984 May 15 11:59:30.610618 sshd-session[5895]: pam_unix(sshd:session): session closed for user core May 15 11:59:30.613484 systemd[1]: sshd@23-10.200.20.23:22-10.200.16.10:41984.service: Deactivated successfully. May 15 11:59:30.615845 systemd[1]: session-26.scope: Deactivated successfully. May 15 11:59:30.620492 systemd-logind[1854]: Session 26 logged out. Waiting for processes to exit. May 15 11:59:30.621424 systemd-logind[1854]: Removed session 26. May 15 11:59:32.423788 containerd[1874]: time="2025-05-15T11:59:32.423748480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" id:\"0479e7856598cb06a5fa60ef1f99b975d0fd41fa243ec7591985e9c6995360d8\" pid:5920 exited_at:{seconds:1747310372 nanos:423415048}" May 15 11:59:33.782673 containerd[1874]: time="2025-05-15T11:59:33.782630283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcbf402eac03a23df6a4243b08848cc2b567575ae524689ad71498f937d31e23\" id:\"473ec9bebafa683c84706be90db36cdf4ac61b9c7bdfd37d272de8be39777c30\" pid:5941 exited_at:{seconds:1747310373 nanos:782491136}" May 15 11:59:35.692941 systemd[1]: Started sshd@24-10.200.20.23:22-10.200.16.10:41996.service - OpenSSH per-connection server daemon (10.200.16.10:41996). 
May 15 11:59:36.151095 sshd[5950]: Accepted publickey for core from 10.200.16.10 port 41996 ssh2: RSA SHA256:eqZH8i+mbXa4bcBb58m8yxDt9xvP66g2WQqbkjlQjHI May 15 11:59:36.152082 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 11:59:36.155478 systemd-logind[1854]: New session 27 of user core. May 15 11:59:36.162551 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 11:59:36.513595 sshd[5952]: Connection closed by 10.200.16.10 port 41996 May 15 11:59:36.512929 sshd-session[5950]: pam_unix(sshd:session): session closed for user core May 15 11:59:36.515373 systemd-logind[1854]: Session 27 logged out. Waiting for processes to exit. May 15 11:59:36.515502 systemd[1]: sshd@24-10.200.20.23:22-10.200.16.10:41996.service: Deactivated successfully. May 15 11:59:36.516780 systemd[1]: session-27.scope: Deactivated successfully. May 15 11:59:36.518606 systemd-logind[1854]: Removed session 27.