Jul 6 23:45:45.070357 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 6 23:45:45.070376 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025
Jul 6 23:45:45.070382 kernel: KASLR enabled
Jul 6 23:45:45.070386 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 6 23:45:45.070391 kernel: printk: legacy bootconsole [pl11] enabled
Jul 6 23:45:45.070395 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:45:45.070400 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Jul 6 23:45:45.070404 kernel: random: crng init done
Jul 6 23:45:45.070408 kernel: secureboot: Secure boot disabled
Jul 6 23:45:45.070412 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:45:45.070415 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 6 23:45:45.070419 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070423 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070428 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 6 23:45:45.070433 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070437 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070442 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070446 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070451 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070455 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070459 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 6 23:45:45.070463 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:45:45.070467 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 6 23:45:45.070471 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:45:45.070476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 6 23:45:45.070480 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 6 23:45:45.070484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 6 23:45:45.070488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 6 23:45:45.070492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 6 23:45:45.070497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 6 23:45:45.070501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 6 23:45:45.070505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 6 23:45:45.070509 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 6 23:45:45.070513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 6 23:45:45.070517 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 6 23:45:45.070521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 6 23:45:45.070526 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 6 23:45:45.070530 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
Jul 6 23:45:45.070534 kernel: Zone ranges:
Jul 6 23:45:45.070538 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 6 23:45:45.070545 kernel: DMA32 empty
Jul 6 23:45:45.070549 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:45:45.070553 kernel: Device empty
Jul 6 23:45:45.070558 kernel: Movable zone start for each node
Jul 6 23:45:45.070562 kernel: Early memory node ranges
Jul 6 23:45:45.070567 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 6 23:45:45.070571 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 6 23:45:45.070576 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 6 23:45:45.070580 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 6 23:45:45.070584 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 6 23:45:45.070589 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 6 23:45:45.070593 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 6 23:45:45.070597 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 6 23:45:45.070602 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 6 23:45:45.070606 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 6 23:45:45.070610 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 6 23:45:45.070614 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:45:45.070619 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:45:45.070624 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:45:45.070628 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 6 23:45:45.070632 kernel: psci: SMC Calling Convention v1.4
Jul 6 23:45:45.070637 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 6 23:45:45.070641 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 6 23:45:45.070645 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:45:45.070650 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:45:45.070654 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 6 23:45:45.070658 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:45:45.070663 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 6 23:45:45.070668 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:45:45.070672 kernel: CPU features: detected: Spectre-v4
Jul 6 23:45:45.070676 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:45:45.070681 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:45:45.070685 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:45:45.070690 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 6 23:45:45.070694 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:45:45.070698 kernel: alternatives: applying boot alternatives
Jul 6 23:45:45.070703 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:45:45.070708 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:45:45.070712 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:45:45.070717 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:45:45.070722 kernel: Fallback order for Node 0: 0
Jul 6 23:45:45.070726 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 6 23:45:45.070730 kernel: Policy zone: Normal
Jul 6 23:45:45.070735 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:45:45.070739 kernel: software IO TLB: area num 2.
Jul 6 23:45:45.070744 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jul 6 23:45:45.070748 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:45:45.070752 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:45:45.070757 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:45:45.070762 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:45:45.070767 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:45:45.070771 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:45:45.070776 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:45:45.070780 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:45:45.070785 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:45:45.070789 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:45:45.070793 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:45:45.070798 kernel: GICv3: 960 SPIs implemented
Jul 6 23:45:45.070802 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:45:45.070806 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:45:45.070810 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 6 23:45:45.070815 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 6 23:45:45.070820 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 6 23:45:45.070824 kernel: ITS: No ITS available, not enabling LPIs
Jul 6 23:45:45.070829 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:45:45.070833 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 6 23:45:45.070838 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:45:45.070842 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 6 23:45:45.070847 kernel: Console: colour dummy device 80x25
Jul 6 23:45:45.070851 kernel: printk: legacy console [tty1] enabled
Jul 6 23:45:45.070856 kernel: ACPI: Core revision 20240827
Jul 6 23:45:45.070860 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 6 23:45:45.070866 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:45:45.070870 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:45:45.070875 kernel: landlock: Up and running.
Jul 6 23:45:45.070879 kernel: SELinux: Initializing.
Jul 6 23:45:45.070884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:45:45.070888 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:45:45.070896 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 6 23:45:45.070901 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 6 23:45:45.070906 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:45:45.070911 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:45:45.070915 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:45:45.070920 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:45:45.070925 kernel: Remapping and enabling EFI services.
Jul 6 23:45:45.070930 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:45:45.070935 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:45:45.070940 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 6 23:45:45.070944 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 6 23:45:45.070950 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:45:45.070954 kernel: SMP: Total of 2 processors activated.
Jul 6 23:45:45.070959 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:45:45.070964 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:45:45.070968 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 6 23:45:45.070973 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:45:45.070978 kernel: CPU features: detected: Common not Private translations
Jul 6 23:45:45.070983 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:45:45.070987 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 6 23:45:45.070993 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:45:45.070998 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:45:45.071002 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:45:45.071007 kernel: CPU features: detected: Speculation barrier (SB)
Jul 6 23:45:45.071011 kernel: CPU features: detected: TLB range maintenance instructions
Jul 6 23:45:45.071016 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:45:45.071021 kernel: CPU features: detected: Scalable Vector Extension
Jul 6 23:45:45.071026 kernel: alternatives: applying system-wide alternatives
Jul 6 23:45:45.071030 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 6 23:45:45.071036 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 6 23:45:45.071041 kernel: SVE: default vector length 16 bytes per vector
Jul 6 23:45:45.071046 kernel: Memory: 3975672K/4194160K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 213688K reserved, 0K cma-reserved)
Jul 6 23:45:45.071050 kernel: devtmpfs: initialized
Jul 6 23:45:45.071055 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:45:45.071060 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:45:45.071065 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:45:45.071069 kernel: 0 pages in range for non-PLT usage
Jul 6 23:45:45.071074 kernel: 508480 pages in range for PLT usage
Jul 6 23:45:45.071079 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:45:45.071084 kernel: SMBIOS 3.1.0 present.
Jul 6 23:45:45.071089 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 6 23:45:45.071094 kernel: DMI: Memory slots populated: 2/2
Jul 6 23:45:45.071098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:45:45.071103 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:45:45.071108 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:45:45.071113 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:45:45.071117 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:45:45.071123 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 6 23:45:45.071128 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:45:45.071132 kernel: cpuidle: using governor menu
Jul 6 23:45:45.071137 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:45:45.071142 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:45:45.071146 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:45:45.071151 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:45:45.071156 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:45:45.071161 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:45:45.071166 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:45:45.071171 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:45:45.071175 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:45:45.071180 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:45:45.071185 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:45:45.071190 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:45:45.071204 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:45:45.071209 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:45:45.071213 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:45:45.071219 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:45:45.071224 kernel: ACPI: Interpreter enabled
Jul 6 23:45:45.071229 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:45:45.071234 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:45:45.071238 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:45:45.071243 kernel: printk: legacy bootconsole [pl11] disabled
Jul 6 23:45:45.071248 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 6 23:45:45.071252 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:45:45.071257 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:45:45.071263 kernel: iommu: Default domain type: Translated
Jul 6 23:45:45.071267 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:45:45.071272 kernel: efivars: Registered efivars operations
Jul 6 23:45:45.071277 kernel: vgaarb: loaded
Jul 6 23:45:45.071282 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:45:45.071286 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:45:45.071291 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:45:45.071296 kernel: pnp: PnP ACPI init
Jul 6 23:45:45.071300 kernel: pnp: PnP ACPI: found 0 devices
Jul 6 23:45:45.071306 kernel: NET: Registered PF_INET protocol family
Jul 6 23:45:45.071311 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:45:45.071315 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:45:45.071320 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:45:45.071325 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:45:45.071330 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:45:45.071335 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:45:45.071339 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:45:45.071344 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:45:45.071349 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:45:45.071354 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:45:45.071359 kernel: kvm [1]: HYP mode not available
Jul 6 23:45:45.071364 kernel: Initialise system trusted keyrings
Jul 6 23:45:45.071368 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:45:45.071373 kernel: Key type asymmetric registered
Jul 6 23:45:45.071378 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:45:45.071382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:45:45.071387 kernel: io scheduler mq-deadline registered
Jul 6 23:45:45.071393 kernel: io scheduler kyber registered
Jul 6 23:45:45.071398 kernel: io scheduler bfq registered
Jul 6 23:45:45.071402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:45:45.071407 kernel: thunder_xcv, ver 1.0
Jul 6 23:45:45.071411 kernel: thunder_bgx, ver 1.0
Jul 6 23:45:45.071416 kernel: nicpf, ver 1.0
Jul 6 23:45:45.071421 kernel: nicvf, ver 1.0
Jul 6 23:45:45.071526 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:45:45.071577 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:45:44 UTC (1751845544)
Jul 6 23:45:45.071584 kernel: efifb: probing for efifb
Jul 6 23:45:45.071589 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:45:45.071593 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:45:45.071598 kernel: efifb: scrolling: redraw
Jul 6 23:45:45.071603 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:45:45.071608 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:45:45.071613 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:45:45.071617 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 6 23:45:45.071623 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:45:45.071628 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:45:45.071632 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:45:45.071637 kernel: watchdog: NMI not fully supported
Jul 6 23:45:45.071642 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:45:45.073223 kernel: Segment Routing with IPv6
Jul 6 23:45:45.073242 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:45:45.073248 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:45:45.073253 kernel: Key type dns_resolver registered
Jul 6 23:45:45.073264 kernel: registered taskstats version 1
Jul 6 23:45:45.073268 kernel: Loading compiled-in X.509 certificates
Jul 6 23:45:45.073274 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb'
Jul 6 23:45:45.073278 kernel: Demotion targets for Node 0: null
Jul 6 23:45:45.073283 kernel: Key type .fscrypt registered
Jul 6 23:45:45.073288 kernel: Key type fscrypt-provisioning registered
Jul 6 23:45:45.073293 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:45:45.073298 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:45:45.073302 kernel: ima: No architecture policies found
Jul 6 23:45:45.073308 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:45:45.073313 kernel: clk: Disabling unused clocks
Jul 6 23:45:45.073318 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:45:45.073323 kernel: Warning: unable to open an initial console.
Jul 6 23:45:45.073328 kernel: Freeing unused kernel memory: 39424K
Jul 6 23:45:45.073332 kernel: Run /init as init process
Jul 6 23:45:45.073337 kernel: with arguments:
Jul 6 23:45:45.073342 kernel: /init
Jul 6 23:45:45.073346 kernel: with environment:
Jul 6 23:45:45.073352 kernel: HOME=/
Jul 6 23:45:45.073357 kernel: TERM=linux
Jul 6 23:45:45.073361 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:45:45.073367 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:45:45.073375 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:45:45.073380 systemd[1]: Detected virtualization microsoft.
Jul 6 23:45:45.073385 systemd[1]: Detected architecture arm64.
Jul 6 23:45:45.073391 systemd[1]: Running in initrd.
Jul 6 23:45:45.073396 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:45:45.073402 systemd[1]: Hostname set to .
Jul 6 23:45:45.073407 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:45:45.073412 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:45:45.073417 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:45:45.073422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:45:45.073428 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:45:45.073434 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:45:45.073439 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:45:45.073445 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:45:45.073451 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:45:45.073456 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:45:45.073461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:45:45.073466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:45:45.073472 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:45:45.073478 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:45:45.073483 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:45:45.073488 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:45:45.073493 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:45:45.073498 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:45:45.073503 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:45:45.073509 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:45:45.073514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:45:45.073520 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:45:45.073525 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:45:45.073530 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:45:45.073536 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:45:45.073541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:45:45.073546 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:45:45.073551 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:45:45.073557 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:45:45.073563 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:45:45.073568 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:45:45.073593 systemd-journald[224]: Collecting audit messages is disabled.
Jul 6 23:45:45.073608 systemd-journald[224]: Journal started
Jul 6 23:45:45.073623 systemd-journald[224]: Runtime Journal (/run/log/journal/9fb4e4f20bf442d681300a78ed6ce3ba) is 8M, max 78.5M, 70.5M free.
Jul 6 23:45:45.079480 systemd-modules-load[226]: Inserted module 'overlay'
Jul 6 23:45:45.084456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:45:45.095210 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:45:45.095242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:45:45.108275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:45:45.117175 kernel: Bridge firewalling registered
Jul 6 23:45:45.110807 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jul 6 23:45:45.129687 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:45:45.135632 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:45:45.139354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:45:45.152208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:45:45.160333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:45:45.179662 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:45:45.184022 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:45:45.195443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:45:45.213187 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:45:45.218616 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:45:45.226427 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:45:45.237522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:45:45.248218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:45:45.261420 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:45:45.291339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:45:45.298920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:45:45.319843 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2
Jul 6 23:45:45.347259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:45:45.373074 systemd-resolved[261]: Positive Trust Anchors:
Jul 6 23:45:45.373092 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:45:45.373114 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:45:45.430471 kernel: SCSI subsystem initialized
Jul 6 23:45:45.430491 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:45:45.375930 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 6 23:45:45.382533 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:45:45.422035 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:45:45.446204 kernel: iscsi: registered transport (tcp)
Jul 6 23:45:45.460254 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:45:45.460291 kernel: QLogic iSCSI HBA Driver
Jul 6 23:45:45.473271 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:45:45.491407 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:45:45.504420 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:45:45.545539 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:45:45.552315 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:45:45.613212 kernel: raid6: neonx8 gen() 18534 MB/s
Jul 6 23:45:45.632272 kernel: raid6: neonx4 gen() 18568 MB/s
Jul 6 23:45:45.651270 kernel: raid6: neonx2 gen() 17068 MB/s
Jul 6 23:45:45.671201 kernel: raid6: neonx1 gen() 15013 MB/s
Jul 6 23:45:45.690273 kernel: raid6: int64x8 gen() 10541 MB/s
Jul 6 23:45:45.711211 kernel: raid6: int64x4 gen() 10606 MB/s
Jul 6 23:45:45.731201 kernel: raid6: int64x2 gen() 8983 MB/s
Jul 6 23:45:45.752959 kernel: raid6: int64x1 gen() 7013 MB/s
Jul 6 23:45:45.752991 kernel: raid6: using algorithm neonx4 gen() 18568 MB/s
Jul 6 23:45:45.775037 kernel: raid6: .... xor() 15050 MB/s, rmw enabled
Jul 6 23:45:45.775075 kernel: raid6: using neon recovery algorithm
Jul 6 23:45:45.783166 kernel: xor: measuring software checksum speed
Jul 6 23:45:45.783174 kernel: 8regs : 28603 MB/sec
Jul 6 23:45:45.785655 kernel: 32regs : 28832 MB/sec
Jul 6 23:45:45.788359 kernel: arm64_neon : 37604 MB/sec
Jul 6 23:45:45.792321 kernel: xor: using function: arm64_neon (37604 MB/sec)
Jul 6 23:45:45.829207 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:45:45.834143 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:45:45.843217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:45:45.870371 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jul 6 23:45:45.874558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:45:45.884318 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:45:45.925380 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation
Jul 6 23:45:45.944467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:45:45.950738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:45:45.996132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:45:46.009249 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:45:46.064215 kernel: hv_vmbus: Vmbus version:5.3
Jul 6 23:45:46.078609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:45:46.098718 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 6 23:45:46.098746 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 6 23:45:46.098754 kernel: PTP clock support registered
Jul 6 23:45:46.082553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:45:46.104148 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:45:46.128275 kernel: hv_vmbus: registering driver hv_netvsc
Jul 6 23:45:46.128292 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 6 23:45:46.128299 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 6 23:45:46.111869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:45:46.142113 kernel: hv_vmbus: registering driver hv_storvsc
Jul 6 23:45:46.133092 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:45:46.143699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:45:46.143935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:45:46.186851 kernel: hv_utils: Registering HyperV Utility Driver
Jul 6 23:45:46.186875 kernel: hv_vmbus: registering driver hid_hyperv
Jul 6 23:45:46.186882 kernel: hv_vmbus: registering driver hv_utils
Jul 6 23:45:46.186888 kernel: scsi host0: storvsc_host_t
Jul 6 23:45:46.162304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:45:46.149584 kernel: scsi host1: storvsc_host_t
Jul 6 23:45:46.156269 kernel: hv_utils: Heartbeat IC version 3.0
Jul 6 23:45:46.156283 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 6 23:45:46.156402 kernel: hv_utils: Shutdown IC version 3.2
Jul 6 23:45:46.156408 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 6 23:45:46.156420 kernel: hv_netvsc 000d3a06-3b5e-000d-3a06-3b5e000d3a06 eth0: VF slot 1 added
Jul 6 23:45:46.156487 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 6 23:45:46.156492 kernel: hv_utils: TimeSync IC version 4.0
Jul 6 23:45:46.156499 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 6 23:45:46.156556 systemd-journald[224]: Time jumped backwards, rotating.
Jul 6 23:45:46.143932 systemd-resolved[261]: Clock change detected. Flushing caches.
Jul 6 23:45:46.157464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:45:46.182772 kernel: hv_vmbus: registering driver hv_pci
Jul 6 23:45:46.182798 kernel: hv_pci 746db324-cf67-40c5-b179-4d9bc8d79d2f: PCI VMBus probing: Using version 0x10004
Jul 6 23:45:46.182938 kernel: hv_pci 746db324-cf67-40c5-b179-4d9bc8d79d2f: PCI host bridge to bus cf67:00
Jul 6 23:45:46.193911 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 6 23:45:46.194041 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jul 6 23:45:46.194108 kernel: pci_bus cf67:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 6 23:45:46.201105 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jul 6 23:45:46.201211 kernel: pci_bus cf67:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 6 23:45:46.206247 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 6 23:45:46.206331 kernel: pci cf67:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 6 23:45:46.214494 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 6 23:45:46.214582 kernel: pci cf67:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 6 23:45:46.223480 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 6 23:45:46.223560 kernel: pci cf67:00:02.0: enabling Extended Tags
Jul 6 23:45:46.235112 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#292 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 6 23:45:46.235241 kernel: pci cf67:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cf67:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 6 23:45:46.253702 kernel: pci_bus cf67:00: busn_res: [bus 00-ff] end is updated to 00
Jul 6 23:45:46.253782 kernel: pci cf67:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 6 23:45:46.264117 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:45:46.264146 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jul 6 23:45:46.272340 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Jul 6 23:45:46.272485 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:45:46.273751 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Jul 6 23:45:46.291773 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#275 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 6 23:45:46.316810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#295 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 6 23:45:46.332794 kernel: mlx5_core cf67:00:02.0: enabling device (0000 -> 0002)
Jul 6 23:45:46.340579 kernel: mlx5_core cf67:00:02.0: PTM is not supported by PCIe
Jul 6 23:45:46.340757 kernel: mlx5_core cf67:00:02.0: firmware version: 16.30.5006
Jul 6 23:45:46.507751 kernel: hv_netvsc 000d3a06-3b5e-000d-3a06-3b5e000d3a06 eth0: VF registering: eth1
Jul 6 23:45:46.507954 kernel: mlx5_core cf67:00:02.0 eth1: joined to eth0
Jul 6 23:45:46.513145 kernel: mlx5_core cf67:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 6 23:45:46.523770 kernel: mlx5_core cf67:00:02.0 enP53095s1: renamed from eth1
Jul 6 23:45:46.851698 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 6 23:45:46.868890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:45:46.925377 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 6 23:45:46.940393 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 6 23:45:46.945537 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 6 23:45:46.962622 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:45:46.967489 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:45:46.973057 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:45:46.982089 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:45:46.993551 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:45:47.013375 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:45:47.032750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 6 23:45:47.045847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:45:47.050385 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:45:48.071939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 6 23:45:48.083854 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:45:48.083887 disk-uuid[665]: The operation has completed successfully.
Jul 6 23:45:48.139301 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:45:48.139394 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:45:48.167809 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:45:48.183753 sh[825]: Success
Jul 6 23:45:48.218470 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:45:48.218529 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:45:48.223088 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 6 23:45:48.231756 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 6 23:45:48.423199 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:45:48.430490 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:45:48.446949 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:45:48.470254 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 6 23:45:48.470286 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (843)
Jul 6 23:45:48.479787 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296
Jul 6 23:45:48.479810 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:45:48.482851 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 6 23:45:48.762589 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:45:48.766167 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:45:48.774368 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:45:48.774954 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:45:48.801122 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:45:48.826771 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (867)
Jul 6 23:45:48.837006 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:45:48.837036 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:45:48.840131 kernel: BTRFS info (device sda6): using free-space-tree
Jul 6 23:45:48.875763 kernel: BTRFS info (device sda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:45:48.876353 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:45:48.882568 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:45:48.919773 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:45:48.932785 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:45:48.973391 systemd-networkd[1012]: lo: Link UP
Jul 6 23:45:48.973401 systemd-networkd[1012]: lo: Gained carrier
Jul 6 23:45:48.974572 systemd-networkd[1012]: Enumeration completed
Jul 6 23:45:48.976705 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:45:48.981348 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:45:48.981351 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:45:48.981881 systemd[1]: Reached target network.target - Network.
Jul 6 23:45:49.050756 kernel: mlx5_core cf67:00:02.0 enP53095s1: Link up
Jul 6 23:45:49.085756 kernel: hv_netvsc 000d3a06-3b5e-000d-3a06-3b5e000d3a06 eth0: Data path switched to VF: enP53095s1
Jul 6 23:45:49.086562 systemd-networkd[1012]: enP53095s1: Link UP
Jul 6 23:45:49.086767 systemd-networkd[1012]: eth0: Link UP
Jul 6 23:45:49.087070 systemd-networkd[1012]: eth0: Gained carrier
Jul 6 23:45:49.087078 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:45:49.097076 systemd-networkd[1012]: enP53095s1: Gained carrier
Jul 6 23:45:49.111762 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 6 23:45:50.266592 ignition[965]: Ignition 2.21.0
Jul 6 23:45:50.266608 ignition[965]: Stage: fetch-offline
Jul 6 23:45:50.269820 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:45:50.266687 ignition[965]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:50.276024 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:45:50.266693 ignition[965]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:45:50.266806 ignition[965]: parsed url from cmdline: ""
Jul 6 23:45:50.266809 ignition[965]: no config URL provided
Jul 6 23:45:50.266812 ignition[965]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:45:50.266817 ignition[965]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:45:50.266820 ignition[965]: failed to fetch config: resource requires networking
Jul 6 23:45:50.266946 ignition[965]: Ignition finished successfully
Jul 6 23:45:50.302059 ignition[1022]: Ignition 2.21.0
Jul 6 23:45:50.302064 ignition[1022]: Stage: fetch
Jul 6 23:45:50.302222 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:50.302229 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:45:50.302309 ignition[1022]: parsed url from cmdline: ""
Jul 6 23:45:50.302311 ignition[1022]: no config URL provided
Jul 6 23:45:50.302315 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:45:50.302323 ignition[1022]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:45:50.302360 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 6 23:45:50.366757 ignition[1022]: GET result: OK
Jul 6 23:45:50.366819 ignition[1022]: config has been read from IMDS userdata
Jul 6 23:45:50.369143 unknown[1022]: fetched base config from "system"
Jul 6 23:45:50.366839 ignition[1022]: parsing config with SHA512: 9de178ae47c223827f11ef1a11e491fe915b6eb274a9f199ab238b0e70b8270a5fdfe42cc39c8ce840a961e9f3f289f3e3a1479524a47a19b04f58bd9bc4eeae
Jul 6 23:45:50.369154 unknown[1022]: fetched base config from "system"
Jul 6 23:45:50.370168 ignition[1022]: fetch: fetch complete
Jul 6 23:45:50.369157 unknown[1022]: fetched user config from "azure"
Jul 6 23:45:50.370173 ignition[1022]: fetch: fetch passed
Jul 6 23:45:50.373269 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:45:50.371324 ignition[1022]: Ignition finished successfully
Jul 6 23:45:50.380563 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:45:50.419140 ignition[1029]: Ignition 2.21.0
Jul 6 23:45:50.421420 ignition[1029]: Stage: kargs
Jul 6 23:45:50.421589 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:50.421596 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:45:50.428647 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:45:50.422489 ignition[1029]: kargs: kargs passed
Jul 6 23:45:50.433928 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:45:50.422530 ignition[1029]: Ignition finished successfully
Jul 6 23:45:50.457349 ignition[1035]: Ignition 2.21.0
Jul 6 23:45:50.457359 ignition[1035]: Stage: disks
Jul 6 23:45:50.460732 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:45:50.457534 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:50.466673 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:45:50.457541 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:45:50.471258 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:45:50.458316 ignition[1035]: disks: disks passed
Jul 6 23:45:50.480005 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:45:50.458348 ignition[1035]: Ignition finished successfully
Jul 6 23:45:50.486965 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:45:50.494955 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:45:50.503119 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:45:50.585820 systemd-fsck[1043]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 6 23:45:50.594073 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:45:50.604591 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:45:50.781754 kernel: EXT4-fs (sda9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none.
Jul 6 23:45:50.782663 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:45:50.786534 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:45:50.808890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:45:50.824237 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:45:50.831710 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:45:50.841413 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:45:50.866515 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1057)
Jul 6 23:45:50.866536 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:45:50.841443 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:45:50.883252 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:45:50.883270 kernel: BTRFS info (device sda6): using free-space-tree
Jul 6 23:45:50.852942 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:45:50.877198 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:45:50.879373 systemd-networkd[1012]: enP53095s1: Gained IPv6LL
Jul 6 23:45:50.903992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:45:51.124871 systemd-networkd[1012]: eth0: Gained IPv6LL
Jul 6 23:45:51.496331 coreos-metadata[1059]: Jul 06 23:45:51.496 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:45:51.505211 coreos-metadata[1059]: Jul 06 23:45:51.505 INFO Fetch successful
Jul 6 23:45:51.509279 coreos-metadata[1059]: Jul 06 23:45:51.505 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:45:51.518047 coreos-metadata[1059]: Jul 06 23:45:51.517 INFO Fetch successful
Jul 6 23:45:51.537349 coreos-metadata[1059]: Jul 06 23:45:51.537 INFO wrote hostname ci-4344.1.1-a-ba147b1783 to /sysroot/etc/hostname
Jul 6 23:45:51.544583 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:45:51.752358 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:45:51.787283 initrd-setup-root[1094]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:45:51.808376 initrd-setup-root[1101]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:45:51.814547 initrd-setup-root[1108]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:45:52.658511 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:45:52.663667 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:45:52.681295 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:45:52.693671 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:45:52.701286 kernel: BTRFS info (device sda6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:45:52.710780 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:45:52.721466 ignition[1176]: INFO : Ignition 2.21.0
Jul 6 23:45:52.721466 ignition[1176]: INFO : Stage: mount
Jul 6 23:45:52.727732 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:52.727732 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:45:52.727732 ignition[1176]: INFO : mount: mount passed
Jul 6 23:45:52.727732 ignition[1176]: INFO : Ignition finished successfully
Jul 6 23:45:52.725544 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:45:52.732232 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:45:52.759846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:45:52.784416 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1188)
Jul 6 23:45:52.784449 kernel: BTRFS info (device sda6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93
Jul 6 23:45:52.788582 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 6 23:45:52.791490 kernel: BTRFS info (device sda6): using free-space-tree
Jul 6 23:45:52.793606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:45:52.817770 ignition[1206]: INFO : Ignition 2.21.0
Jul 6 23:45:52.817770 ignition[1206]: INFO : Stage: files
Jul 6 23:45:52.824110 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:52.824110 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:45:52.824110 ignition[1206]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:45:52.848248 ignition[1206]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:45:52.848248 ignition[1206]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:45:52.883015 ignition[1206]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:45:52.888078 ignition[1206]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:45:52.888078 ignition[1206]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:45:52.883429 unknown[1206]: wrote ssh authorized keys file for user: core
Jul 6 23:45:52.903773 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:45:52.911042 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 6 23:45:52.968519 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:45:53.073932 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:45:53.081459 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:45:53.137760 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:45:53.137760 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:45:53.137760 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:45:53.137760 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:45:53.137760 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:45:53.137760 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 6 23:45:53.842564 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:45:54.051403 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 6 23:45:54.051403 ignition[1206]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:45:54.131778 ignition[1206]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:45:54.142943 ignition[1206]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:45:54.142943 ignition[1206]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:45:54.142943 ignition[1206]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:45:54.169629 ignition[1206]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:45:54.169629 ignition[1206]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:45:54.169629 ignition[1206]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:45:54.169629 ignition[1206]: INFO : files: files passed
Jul 6 23:45:54.169629 ignition[1206]: INFO : Ignition finished successfully
Jul 6 23:45:54.151664 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:45:54.160386 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:45:54.191352 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:45:54.201009 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:45:54.213860 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:45:54.228526 initrd-setup-root-after-ignition[1233]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:45:54.228526 initrd-setup-root-after-ignition[1233]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:45:54.225146 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:45:54.256768 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:45:54.233625 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:45:54.244768 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:45:54.284118 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:45:54.284205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:45:54.292979 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:45:54.301186 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:45:54.308874 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:45:54.309544 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:45:54.343853 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:45:54.350334 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:45:54.374586 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:45:54.379597 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:45:54.388533 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:45:54.397763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:45:54.397867 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:45:54.410421 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:45:54.418720 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:45:54.426563 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:45:54.434159 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:45:54.442942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:45:54.451823 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 6 23:45:54.460776 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:45:54.468778 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:45:54.478071 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:45:54.487390 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:45:54.494731 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:45:54.501436 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:45:54.501582 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:45:54.512053 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:45:54.520250 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:45:54.528939 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:45:54.533144 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:45:54.538018 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:45:54.538157 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:45:54.550528 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:45:54.550668 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:45:54.558544 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:45:54.558654 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:45:54.566222 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 6 23:45:54.566322 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:45:54.576836 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:45:54.623027 ignition[1258]: INFO : Ignition 2.21.0
Jul 6 23:45:54.623027 ignition[1258]: INFO : Stage: umount
Jul 6 23:45:54.623027 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:54.623027 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 6 23:45:54.623027 ignition[1258]: INFO : umount: umount passed
Jul 6 23:45:54.623027 ignition[1258]: INFO : Ignition finished successfully
Jul 6 23:45:54.588698 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:45:54.588859 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:45:54.598834 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:45:54.613449 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:45:54.613595 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:45:54.619485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:45:54.619564 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:45:54.631436 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:45:54.631522 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:45:54.643110 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:45:54.643210 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:45:54.654320 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:45:54.654371 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:45:54.661253 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:45:54.661292 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:45:54.670216 systemd[1]: Stopped target network.target - Network. Jul 6 23:45:54.677895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:45:54.677947 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:45:54.689744 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:45:54.699803 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:45:54.704408 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:45:54.714131 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:45:54.723682 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:45:54.732790 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:45:54.732852 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:45:54.742207 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:45:54.742254 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:45:54.750993 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:45:54.751049 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:45:54.758779 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:45:54.758811 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 6 23:45:54.767382 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:45:54.776755 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:45:54.786078 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:45:54.786552 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:45:54.786632 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:45:54.795462 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:45:54.795548 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:45:54.809317 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:45:54.809681 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:45:54.809786 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:45:54.826043 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:45:54.826250 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:45:54.826322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:45:54.836778 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:45:55.033514 kernel: hv_netvsc 000d3a06-3b5e-000d-3a06-3b5e000d3a06 eth0: Data path switched from VF: enP53095s1 Jul 6 23:45:54.845697 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:45:54.845746 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:45:54.854384 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:45:54.854440 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:45:54.863515 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:45:54.876356 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 6 23:45:54.876420 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:45:54.881863 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:45:54.881906 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:45:54.892787 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:45:54.892822 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:45:54.897289 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:45:54.897325 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:45:54.910626 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:45:54.921469 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:45:54.921525 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:45:54.935992 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:45:54.954956 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:45:54.968388 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:45:54.968430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:45:54.977081 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:45:54.977121 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:45:54.987268 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:45:54.987321 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:45:54.999958 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:45:55.000012 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jul 6 23:45:55.020005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:45:55.020056 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:45:55.034225 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:45:55.050149 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:45:55.050210 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:45:55.060444 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:45:55.060487 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:45:55.071834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:45:55.071880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:55.258622 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 6 23:45:55.082442 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:45:55.082487 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:45:55.082512 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:45:55.082730 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:45:55.082826 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:45:55.125438 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:45:55.125576 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:45:55.135832 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:45:55.146308 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:45:55.166546 systemd[1]: Switching root. 
Jul 6 23:45:55.308345 systemd-journald[224]: Journal stopped Jul 6 23:45:58.965558 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:45:58.965576 kernel: SELinux: policy capability open_perms=1 Jul 6 23:45:58.965584 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:45:58.965589 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:45:58.965595 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:45:58.965600 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:45:58.965606 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:45:58.965611 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:45:58.965616 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:45:58.965621 kernel: audit: type=1403 audit(1751845556.178:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:45:58.965628 systemd[1]: Successfully loaded SELinux policy in 144.528ms. Jul 6 23:45:58.965635 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.817ms. Jul 6 23:45:58.965644 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:45:58.965649 systemd[1]: Detected virtualization microsoft. Jul 6 23:45:58.965656 systemd[1]: Detected architecture arm64. Jul 6 23:45:58.965663 systemd[1]: Detected first boot. Jul 6 23:45:58.965669 systemd[1]: Hostname set to . Jul 6 23:45:58.965675 systemd[1]: Initializing machine ID from random generator. Jul 6 23:45:58.965681 zram_generator::config[1301]: No configuration found. Jul 6 23:45:58.965687 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:45:58.965692 systemd[1]: Populated /etc with preset unit settings. 
Jul 6 23:45:58.965699 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:45:58.965705 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:45:58.965711 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:45:58.965717 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:45:58.965723 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:45:58.965729 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:45:58.965735 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:45:58.966249 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:45:58.966265 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:45:58.966273 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:45:58.966283 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:45:58.966289 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:45:58.966296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:45:58.966302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:45:58.966308 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:45:58.966314 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:45:58.966320 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:45:58.966328 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 6 23:45:58.966334 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:45:58.966342 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:45:58.966348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:45:58.966397 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:45:58.966404 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:45:58.966410 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:45:58.966418 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:45:58.966424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:45:58.966430 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:45:58.966436 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:45:58.966442 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:45:58.966448 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:45:58.966454 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:45:58.966463 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:45:58.966469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:45:58.966475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:45:58.966481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:45:58.966488 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:45:58.966494 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:45:58.966501 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:45:58.966507 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 6 23:45:58.966513 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:45:58.966519 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:45:58.966525 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:45:58.966532 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:45:58.966538 systemd[1]: Reached target machines.target - Containers. Jul 6 23:45:58.966545 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:45:58.966552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:45:58.966558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:45:58.966564 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:45:58.966570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:45:58.966576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:45:58.966583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:45:58.966589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:45:58.966596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:45:58.966602 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:45:58.966609 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:45:58.966615 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:45:58.966621 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 6 23:45:58.966627 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:45:58.966634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:45:58.966640 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:45:58.966646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:45:58.966652 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:45:58.966659 kernel: fuse: init (API version 7.41) Jul 6 23:45:58.966665 kernel: ACPI: bus type drm_connector registered Jul 6 23:45:58.966671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:45:58.966677 kernel: loop: module loaded Jul 6 23:45:58.966683 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:45:58.966689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:45:58.966695 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:45:58.966724 systemd-journald[1405]: Collecting audit messages is disabled. Jul 6 23:45:58.967375 systemd[1]: Stopped verity-setup.service. Jul 6 23:45:58.967397 systemd-journald[1405]: Journal started Jul 6 23:45:58.967420 systemd-journald[1405]: Runtime Journal (/run/log/journal/45115a50bb884f88b9e50b49c18e1359) is 8M, max 78.5M, 70.5M free. Jul 6 23:45:58.245944 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:45:58.252163 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:45:58.252530 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:45:58.252802 systemd[1]: systemd-journald.service: Consumed 2.419s CPU time. 
Jul 6 23:45:58.981904 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:45:58.982530 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:45:58.987038 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:45:58.993725 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:45:58.997920 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:45:59.002672 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:45:59.007590 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:45:59.011887 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:45:59.016939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:45:59.022394 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:45:59.022509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:45:59.027771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:45:59.027898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:45:59.033184 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:45:59.033307 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:45:59.038182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:45:59.038295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:45:59.043766 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:45:59.043894 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:45:59.048513 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:45:59.048617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 6 23:45:59.053917 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:45:59.059588 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:45:59.065323 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:45:59.070621 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:45:59.075714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:45:59.089933 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:45:59.095486 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:45:59.105216 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:45:59.112616 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:45:59.112649 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:45:59.117938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:45:59.124068 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:45:59.128382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:45:59.129335 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:45:59.135862 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:45:59.140467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:45:59.141332 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 6 23:45:59.146428 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:45:59.147202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:45:59.154852 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:45:59.162965 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:45:59.168929 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:45:59.175223 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:45:59.185777 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:45:59.192004 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:45:59.198871 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:45:59.199399 systemd-journald[1405]: Time spent on flushing to /var/log/journal/45115a50bb884f88b9e50b49c18e1359 is 53.086ms for 939 entries. Jul 6 23:45:59.199399 systemd-journald[1405]: System Journal (/var/log/journal/45115a50bb884f88b9e50b49c18e1359) is 11.8M, max 2.6G, 2.6G free. Jul 6 23:45:59.333178 systemd-journald[1405]: Received client request to flush runtime journal. Jul 6 23:45:59.333221 systemd-journald[1405]: /var/log/journal/45115a50bb884f88b9e50b49c18e1359/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 6 23:45:59.333243 systemd-journald[1405]: Rotating system journal. Jul 6 23:45:59.333259 kernel: loop0: detected capacity change from 0 to 138376 Jul 6 23:45:59.232959 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:45:59.308402 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:45:59.313947 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 6 23:45:59.334758 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:45:59.348400 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:45:59.349726 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:45:59.403201 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Jul 6 23:45:59.403213 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Jul 6 23:45:59.406716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:45:59.691863 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:45:59.721762 kernel: loop1: detected capacity change from 0 to 28936 Jul 6 23:46:00.070768 kernel: loop2: detected capacity change from 0 to 107312 Jul 6 23:46:00.192394 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:46:00.199350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:46:00.230040 systemd-udevd[1464]: Using default interface naming scheme 'v255'. Jul 6 23:46:00.391764 kernel: loop3: detected capacity change from 0 to 207008 Jul 6 23:46:00.408761 kernel: loop4: detected capacity change from 0 to 138376 Jul 6 23:46:00.415751 kernel: loop5: detected capacity change from 0 to 28936 Jul 6 23:46:00.421774 kernel: loop6: detected capacity change from 0 to 107312 Jul 6 23:46:00.428754 kernel: loop7: detected capacity change from 0 to 207008 Jul 6 23:46:00.431670 (sd-merge)[1467]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:46:00.432056 (sd-merge)[1467]: Merged extensions into '/usr'. Jul 6 23:46:00.435191 systemd[1]: Reload requested from client PID 1440 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:46:00.435289 systemd[1]: Reloading... Jul 6 23:46:00.521766 zram_generator::config[1519]: No configuration found. 
Jul 6 23:46:00.644998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:00.659775 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#275 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 6 23:46:00.731757 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:46:00.731840 kernel: hv_vmbus: registering driver hv_balloon Jul 6 23:46:00.755430 kernel: hv_vmbus: registering driver hyperv_fb Jul 6 23:46:00.755509 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 6 23:46:00.752129 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:46:00.752445 systemd[1]: Reloading finished in 316 ms. Jul 6 23:46:00.760583 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 6 23:46:00.765536 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:46:00.773336 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:46:00.792918 systemd[1]: Starting ensure-sysext.service... Jul 6 23:46:00.794490 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 6 23:46:00.797424 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 6 23:46:00.807542 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:46:00.810760 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:46:00.827921 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:46:00.837662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:46:00.859180 systemd[1]: Reload requested from client PID 1612 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:46:00.859196 systemd[1]: Reloading... 
Jul 6 23:46:00.864793 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:46:00.867834 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:46:00.868168 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:46:00.869767 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:46:00.871302 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:46:00.873197 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Jul 6 23:46:00.878848 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Jul 6 23:46:00.882392 systemd-tmpfiles[1615]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:46:00.885242 systemd-tmpfiles[1615]: Skipping /boot Jul 6 23:46:00.911146 systemd-tmpfiles[1615]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:46:00.917870 zram_generator::config[1651]: No configuration found. Jul 6 23:46:00.913774 systemd-tmpfiles[1615]: Skipping /boot Jul 6 23:46:01.014767 kernel: MACsec IEEE 802.1AE Jul 6 23:46:01.041314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:01.120675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:46:01.126343 systemd[1]: Reloading finished in 266 ms. Jul 6 23:46:01.138867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:46:01.167269 systemd[1]: Finished ensure-sysext.service. Jul 6 23:46:01.184644 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 6 23:46:01.193876 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:46:01.199494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:46:01.208672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:46:01.215234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:46:01.220792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:46:01.227318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:46:01.232971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:46:01.233847 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:46:01.239823 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:46:01.241071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:46:01.254812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:46:01.260927 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:46:01.268258 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:46:01.278787 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:46:01.285865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:46:01.293892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:46:01.294895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 6 23:46:01.303431 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:46:01.303575 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:46:01.311088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:46:01.311204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:46:01.318157 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:46:01.318291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:46:01.322622 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:46:01.329612 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:46:01.336394 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:46:01.349267 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:46:01.354527 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:46:01.354574 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:46:01.369503 augenrules[1806]: No rules Jul 6 23:46:01.372778 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:46:01.373341 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:46:01.433077 systemd-resolved[1772]: Positive Trust Anchors: Jul 6 23:46:01.433412 systemd-resolved[1772]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:46:01.433490 systemd-resolved[1772]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:46:01.436300 systemd-resolved[1772]: Using system hostname 'ci-4344.1.1-a-ba147b1783'. Jul 6 23:46:01.437888 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:46:01.442546 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:46:01.496135 systemd-networkd[1614]: lo: Link UP Jul 6 23:46:01.496142 systemd-networkd[1614]: lo: Gained carrier Jul 6 23:46:01.497673 systemd-networkd[1614]: Enumeration completed Jul 6 23:46:01.497771 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:46:01.498056 systemd-networkd[1614]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:46:01.498061 systemd-networkd[1614]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:46:01.503767 systemd[1]: Reached target network.target - Network. Jul 6 23:46:01.511900 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:46:01.523868 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 6 23:46:01.550754 kernel: mlx5_core cf67:00:02.0 enP53095s1: Link up Jul 6 23:46:01.571760 kernel: hv_netvsc 000d3a06-3b5e-000d-3a06-3b5e000d3a06 eth0: Data path switched to VF: enP53095s1 Jul 6 23:46:01.573447 systemd-networkd[1614]: enP53095s1: Link UP Jul 6 23:46:01.574365 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:46:01.574519 systemd-networkd[1614]: eth0: Link UP Jul 6 23:46:01.574525 systemd-networkd[1614]: eth0: Gained carrier Jul 6 23:46:01.574542 systemd-networkd[1614]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:46:01.582405 systemd-networkd[1614]: enP53095s1: Gained carrier Jul 6 23:46:01.590768 systemd-networkd[1614]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 6 23:46:01.598799 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:46:01.604164 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:46:01.644816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:46:02.644868 systemd-networkd[1614]: eth0: Gained IPv6LL Jul 6 23:46:02.647502 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:46:02.652697 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:46:03.540912 systemd-networkd[1614]: enP53095s1: Gained IPv6LL Jul 6 23:46:04.004826 ldconfig[1435]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:46:04.018020 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jul 6 23:46:04.025933 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:46:04.040593 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:46:04.046823 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:46:04.053123 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:46:04.060363 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:46:04.069420 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:46:04.076157 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:46:04.083601 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:46:04.089986 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:46:04.090010 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:46:04.094616 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:46:04.099826 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:46:04.105881 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:46:04.111520 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:46:04.116545 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:46:04.121721 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:46:04.140339 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:46:04.145051 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:46:04.150365 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jul 6 23:46:04.154472 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:46:04.158083 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:46:04.161757 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:46:04.161776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:46:04.163423 systemd[1]: Starting chronyd.service - NTP client/server... Jul 6 23:46:04.176835 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:46:04.182923 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:46:04.188867 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:46:04.195185 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:46:04.202355 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:46:04.207867 (chronyd)[1826]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 6 23:46:04.209586 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:46:04.213525 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:46:04.220758 jq[1834]: false Jul 6 23:46:04.219475 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 6 23:46:04.223936 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 6 23:46:04.224819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:04.230006 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:46:04.243853 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 6 23:46:04.248397 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:46:04.253875 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:46:04.260532 KVP[1836]: KVP starting; pid is:1836 Jul 6 23:46:04.263234 chronyd[1849]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 6 23:46:04.270765 kernel: hv_utils: KVP IC version 4.0 Jul 6 23:46:04.263594 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:46:04.267916 KVP[1836]: KVP LIC Version: 3.1 Jul 6 23:46:04.274518 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:46:04.279304 extend-filesystems[1835]: Found /dev/sda6 Jul 6 23:46:04.283385 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:46:04.283677 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:46:04.284167 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:46:04.292987 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:46:04.297824 chronyd[1849]: Timezone right/UTC failed leap second check, ignoring Jul 6 23:46:04.297964 chronyd[1849]: Loaded seccomp filter (level 2) Jul 6 23:46:04.303775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:46:04.307752 jq[1863]: true Jul 6 23:46:04.313689 systemd[1]: Started chronyd.service - NTP client/server. Jul 6 23:46:04.318917 extend-filesystems[1835]: Found /dev/sda9 Jul 6 23:46:04.326863 extend-filesystems[1835]: Checking size of /dev/sda9 Jul 6 23:46:04.319342 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 6 23:46:04.319503 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:46:04.321982 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:46:04.322143 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:46:04.338446 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:46:04.348288 update_engine[1862]: I20250706 23:46:04.347417 1862 main.cc:92] Flatcar Update Engine starting Jul 6 23:46:04.349003 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:46:04.349343 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:46:04.373542 jq[1873]: true Jul 6 23:46:04.375355 systemd-logind[1858]: New seat seat0. Jul 6 23:46:04.379973 systemd-logind[1858]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:46:04.380125 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:46:04.385507 extend-filesystems[1835]: Old size kept for /dev/sda9 Jul 6 23:46:04.387614 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:46:04.396006 (ntainerd)[1874]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:46:04.396213 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:46:04.410940 tar[1872]: linux-arm64/LICENSE Jul 6 23:46:04.411148 tar[1872]: linux-arm64/helm Jul 6 23:46:04.513848 dbus-daemon[1829]: [system] SELinux support is enabled Jul 6 23:46:04.514238 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 6 23:46:04.518499 update_engine[1862]: I20250706 23:46:04.518450 1862 update_check_scheduler.cc:74] Next update check in 7m35s Jul 6 23:46:04.523274 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:46:04.523495 dbus-daemon[1829]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:46:04.523302 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:46:04.532093 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:46:04.532206 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:46:04.541661 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:46:04.549164 bash[1916]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:46:04.573056 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:46:04.579369 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:46:04.588361 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 6 23:46:04.617044 coreos-metadata[1828]: Jul 06 23:46:04.616 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:46:04.623574 coreos-metadata[1828]: Jul 06 23:46:04.623 INFO Fetch successful Jul 6 23:46:04.623574 coreos-metadata[1828]: Jul 06 23:46:04.623 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 6 23:46:04.631084 coreos-metadata[1828]: Jul 06 23:46:04.630 INFO Fetch successful Jul 6 23:46:04.631084 coreos-metadata[1828]: Jul 06 23:46:04.631 INFO Fetching http://168.63.129.16/machine/0a6f9a74-c51e-49a1-882c-b9e3621e1d30/d26d89d7%2Dc177%2D43b2%2Db0b6%2Dba0734597d24.%5Fci%2D4344.1.1%2Da%2Dba147b1783?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 6 23:46:04.638081 coreos-metadata[1828]: Jul 06 23:46:04.637 INFO Fetch successful Jul 6 23:46:04.638081 coreos-metadata[1828]: Jul 06 23:46:04.638 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:46:04.647841 coreos-metadata[1828]: Jul 06 23:46:04.647 INFO Fetch successful Jul 6 23:46:04.689090 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:46:04.697330 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:46:04.945201 locksmithd[1938]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:46:04.957256 sshd_keygen[1860]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:46:04.975853 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:46:04.985400 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:46:04.995730 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 6 23:46:05.015654 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:46:05.018676 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 6 23:46:05.029275 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:46:05.043667 containerd[1874]: time="2025-07-06T23:46:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:46:05.045758 containerd[1874]: time="2025-07-06T23:46:05.044461784Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:46:05.052077 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056568488Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.016µs" Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056594928Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056610008Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056751024Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056765576Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056783296Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056821232Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057331 
containerd[1874]: time="2025-07-06T23:46:05.056828384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.056991488Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.057004496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.057011024Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057331 containerd[1874]: time="2025-07-06T23:46:05.057016280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057513 containerd[1874]: time="2025-07-06T23:46:05.057085008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057513 containerd[1874]: time="2025-07-06T23:46:05.057222336Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057513 containerd[1874]: time="2025-07-06T23:46:05.057242136Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:46:05.057513 containerd[1874]: time="2025-07-06T23:46:05.057247944Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:46:05.059556 containerd[1874]: 
time="2025-07-06T23:46:05.059522632Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:46:05.064467 containerd[1874]: time="2025-07-06T23:46:05.062099152Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:46:05.064467 containerd[1874]: time="2025-07-06T23:46:05.062185976Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:46:05.066793 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:46:05.076966 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:46:05.085882 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091514464Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091564984Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091576536Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091585152Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091593760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091604000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091612144Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 6 
23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091619440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091629888Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091635856Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091642408Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091650384Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091768720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:46:05.092214 containerd[1874]: time="2025-07-06T23:46:05.091785480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091802200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091808904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091815952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091825200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: 
time="2025-07-06T23:46:05.091832976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091841944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091848704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091854776Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091860464Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091919048Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091928568Z" level=info msg="Start snapshots syncer" Jul 6 23:46:05.092383 containerd[1874]: time="2025-07-06T23:46:05.091945944Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:46:05.092785 containerd[1874]: time="2025-07-06T23:46:05.092751984Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:46:05.092785 containerd[1874]: time="2025-07-06T23:46:05.092863264Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 6 23:46:05.093155 containerd[1874]: time="2025-07-06T23:46:05.093078136Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:46:05.093388 containerd[1874]: time="2025-07-06T23:46:05.093371088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:46:05.093510 containerd[1874]: time="2025-07-06T23:46:05.093447328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:46:05.093510 containerd[1874]: time="2025-07-06T23:46:05.093460136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:46:05.093510 containerd[1874]: time="2025-07-06T23:46:05.093476176Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:46:05.093510 containerd[1874]: time="2025-07-06T23:46:05.093483784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:46:05.093510 containerd[1874]: time="2025-07-06T23:46:05.093491432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:46:05.093510 containerd[1874]: time="2025-07-06T23:46:05.093497752Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:46:05.093811 containerd[1874]: time="2025-07-06T23:46:05.093702544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:46:05.093811 containerd[1874]: time="2025-07-06T23:46:05.093765304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:46:05.093811 containerd[1874]: time="2025-07-06T23:46:05.093777584Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 6 23:46:05.093960 containerd[1874]: time="2025-07-06T23:46:05.093917152Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:46:05.093960 containerd[1874]: time="2025-07-06T23:46:05.093936088Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:46:05.093960 containerd[1874]: time="2025-07-06T23:46:05.093941672Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:46:05.093960 containerd[1874]: time="2025-07-06T23:46:05.093947288Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:46:05.094134 containerd[1874]: time="2025-07-06T23:46:05.093951904Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:46:05.094134 containerd[1874]: time="2025-07-06T23:46:05.094112784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:46:05.094268 containerd[1874]: time="2025-07-06T23:46:05.094121960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:46:05.094268 containerd[1874]: time="2025-07-06T23:46:05.094209192Z" level=info msg="runtime interface created" Jul 6 23:46:05.094268 containerd[1874]: time="2025-07-06T23:46:05.094215592Z" level=info msg="created NRI interface" Jul 6 23:46:05.094268 containerd[1874]: time="2025-07-06T23:46:05.094223912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:46:05.094268 containerd[1874]: time="2025-07-06T23:46:05.094233384Z" level=info msg="Connect containerd service" Jul 6 23:46:05.094385 containerd[1874]: time="2025-07-06T23:46:05.094358776Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:46:05.095133 systemd[1]: Reached 
target getty.target - Login Prompts. Jul 6 23:46:05.099141 containerd[1874]: time="2025-07-06T23:46:05.096687952Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:46:05.126636 tar[1872]: linux-arm64/README.md Jul 6 23:46:05.136859 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:46:05.198019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:05.203365 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:46:05.439955 kubelet[2022]: E0706 23:46:05.439889 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:46:05.441975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:46:05.442088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:46:05.442422 systemd[1]: kubelet.service: Consumed 542ms CPU time, 253.8M memory peak. 
Jul 6 23:46:05.855829 containerd[1874]: time="2025-07-06T23:46:05.855769272Z" level=info msg="Start subscribing containerd event" Jul 6 23:46:05.855829 containerd[1874]: time="2025-07-06T23:46:05.855832400Z" level=info msg="Start recovering state" Jul 6 23:46:05.856065 containerd[1874]: time="2025-07-06T23:46:05.855909624Z" level=info msg="Start event monitor" Jul 6 23:46:05.856065 containerd[1874]: time="2025-07-06T23:46:05.855920744Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:46:05.856065 containerd[1874]: time="2025-07-06T23:46:05.855926592Z" level=info msg="Start streaming server" Jul 6 23:46:05.856065 containerd[1874]: time="2025-07-06T23:46:05.855934792Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:46:05.856065 containerd[1874]: time="2025-07-06T23:46:05.855939512Z" level=info msg="runtime interface starting up..." Jul 6 23:46:05.856065 containerd[1874]: time="2025-07-06T23:46:05.855943152Z" level=info msg="starting plugins..." Jul 6 23:46:05.856065 containerd[1874]: time="2025-07-06T23:46:05.855955360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:46:05.856359 containerd[1874]: time="2025-07-06T23:46:05.856246096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:46:05.856359 containerd[1874]: time="2025-07-06T23:46:05.856292704Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:46:05.856359 containerd[1874]: time="2025-07-06T23:46:05.856348080Z" level=info msg="containerd successfully booted in 0.812976s" Jul 6 23:46:05.856558 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:46:05.862343 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:46:05.871808 systemd[1]: Startup finished in 1.638s (kernel) + 11.494s (initrd) + 9.837s (userspace) = 22.970s. 
Jul 6 23:46:06.126964 login[2009]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Jul 6 23:46:06.128169 login[2010]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:06.215830 systemd-logind[1858]: New session 2 of user core.
Jul 6 23:46:06.216792 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:46:06.218335 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:46:06.234055 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:46:06.236787 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:46:06.250193 (systemd)[2044]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:46:06.254349 systemd-logind[1858]: New session c1 of user core.
Jul 6 23:46:06.413347 systemd[2044]: Queued start job for default target default.target.
Jul 6 23:46:06.425685 systemd[2044]: Created slice app.slice - User Application Slice.
Jul 6 23:46:06.425834 systemd[2044]: Reached target paths.target - Paths.
Jul 6 23:46:06.425873 systemd[2044]: Reached target timers.target - Timers.
Jul 6 23:46:06.426832 systemd[2044]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:46:06.433395 systemd[2044]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:46:06.433435 systemd[2044]: Reached target sockets.target - Sockets.
Jul 6 23:46:06.433463 systemd[2044]: Reached target basic.target - Basic System.
Jul 6 23:46:06.433483 systemd[2044]: Reached target default.target - Main User Target.
Jul 6 23:46:06.433502 systemd[2044]: Startup finished in 175ms.
Jul 6 23:46:06.434164 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:46:06.438855 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:46:06.712103 waagent[2005]: 2025-07-06T23:46:06.711985Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jul 6 23:46:06.716318 waagent[2005]: 2025-07-06T23:46:06.716280Z INFO Daemon Daemon OS: flatcar 4344.1.1
Jul 6 23:46:06.719655 waagent[2005]: 2025-07-06T23:46:06.719624Z INFO Daemon Daemon Python: 3.11.12
Jul 6 23:46:06.722890 waagent[2005]: 2025-07-06T23:46:06.722854Z INFO Daemon Daemon Run daemon
Jul 6 23:46:06.725692 waagent[2005]: 2025-07-06T23:46:06.725660Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1'
Jul 6 23:46:06.731910 waagent[2005]: 2025-07-06T23:46:06.731883Z INFO Daemon Daemon Using waagent for provisioning
Jul 6 23:46:06.736381 waagent[2005]: 2025-07-06T23:46:06.736352Z INFO Daemon Daemon Activate resource disk
Jul 6 23:46:06.739689 waagent[2005]: 2025-07-06T23:46:06.739664Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 6 23:46:06.747555 waagent[2005]: 2025-07-06T23:46:06.747520Z INFO Daemon Daemon Found device: None
Jul 6 23:46:06.750938 waagent[2005]: 2025-07-06T23:46:06.750910Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 6 23:46:06.756948 waagent[2005]: 2025-07-06T23:46:06.756926Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 6 23:46:06.764844 waagent[2005]: 2025-07-06T23:46:06.764808Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 6 23:46:06.769065 waagent[2005]: 2025-07-06T23:46:06.769038Z INFO Daemon Daemon Running default provisioning handler
Jul 6 23:46:06.777872 waagent[2005]: 2025-07-06T23:46:06.777830Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jul 6 23:46:06.788203 waagent[2005]: 2025-07-06T23:46:06.788166Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 6 23:46:06.794855 waagent[2005]: 2025-07-06T23:46:06.794828Z INFO Daemon Daemon cloud-init is enabled: False
Jul 6 23:46:06.798452 waagent[2005]: 2025-07-06T23:46:06.798431Z INFO Daemon Daemon Copying ovf-env.xml
Jul 6 23:46:06.905049 waagent[2005]: 2025-07-06T23:46:06.904520Z INFO Daemon Daemon Successfully mounted dvd
Jul 6 23:46:06.929701 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 6 23:46:06.931765 waagent[2005]: 2025-07-06T23:46:06.931399Z INFO Daemon Daemon Detect protocol endpoint
Jul 6 23:46:06.935953 waagent[2005]: 2025-07-06T23:46:06.935916Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 6 23:46:06.942331 waagent[2005]: 2025-07-06T23:46:06.942300Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 6 23:46:06.948703 waagent[2005]: 2025-07-06T23:46:06.948676Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 6 23:46:06.952828 waagent[2005]: 2025-07-06T23:46:06.952795Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 6 23:46:06.956756 waagent[2005]: 2025-07-06T23:46:06.956723Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 6 23:46:07.002352 waagent[2005]: 2025-07-06T23:46:07.002322Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 6 23:46:07.007092 waagent[2005]: 2025-07-06T23:46:07.007074Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 6 23:46:07.010790 waagent[2005]: 2025-07-06T23:46:07.010768Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 6 23:46:07.127356 login[2009]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:07.132709 systemd-logind[1858]: New session 1 of user core.
Jul 6 23:46:07.135843 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:46:07.141221 waagent[2005]: 2025-07-06T23:46:07.139681Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 6 23:46:07.144911 waagent[2005]: 2025-07-06T23:46:07.144860Z INFO Daemon Daemon Forcing an update of the goal state.
Jul 6 23:46:07.152328 waagent[2005]: 2025-07-06T23:46:07.152290Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 6 23:46:07.169897 waagent[2005]: 2025-07-06T23:46:07.169862Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Jul 6 23:46:07.173843 waagent[2005]: 2025-07-06T23:46:07.173816Z INFO Daemon
Jul 6 23:46:07.175830 waagent[2005]: 2025-07-06T23:46:07.175806Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cb3cf454-9743-4c29-821e-f05ad1aec494 eTag: 15598939463629434112 source: Fabric]
Jul 6 23:46:07.185906 waagent[2005]: 2025-07-06T23:46:07.185876Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jul 6 23:46:07.190443 waagent[2005]: 2025-07-06T23:46:07.190415Z INFO Daemon
Jul 6 23:46:07.192357 waagent[2005]: 2025-07-06T23:46:07.192336Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jul 6 23:46:07.200487 waagent[2005]: 2025-07-06T23:46:07.200459Z INFO Daemon Daemon Downloading artifacts profile blob
Jul 6 23:46:07.265055 waagent[2005]: 2025-07-06T23:46:07.264965Z INFO Daemon Downloaded certificate {'thumbprint': 'EADD9C0CF53EB35FFBACAB9B5234A45E51F04152', 'hasPrivateKey': False}
Jul 6 23:46:07.271628 waagent[2005]: 2025-07-06T23:46:07.271596Z INFO Daemon Downloaded certificate {'thumbprint': 'C084FE94F1396A4BE0ED65313A146D3E898F1CB7', 'hasPrivateKey': True}
Jul 6 23:46:07.278974 waagent[2005]: 2025-07-06T23:46:07.278942Z INFO Daemon Fetch goal state completed
Jul 6 23:46:07.288698 waagent[2005]: 2025-07-06T23:46:07.288671Z INFO Daemon Daemon Starting provisioning
Jul 6 23:46:07.292742 waagent[2005]: 2025-07-06T23:46:07.292714Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 6 23:46:07.296040 waagent[2005]: 2025-07-06T23:46:07.296019Z INFO Daemon Daemon Set hostname [ci-4344.1.1-a-ba147b1783]
Jul 6 23:46:07.301275 waagent[2005]: 2025-07-06T23:46:07.301232Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-a-ba147b1783]
Jul 6 23:46:07.306173 waagent[2005]: 2025-07-06T23:46:07.306138Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 6 23:46:07.310628 waagent[2005]: 2025-07-06T23:46:07.310600Z INFO Daemon Daemon Primary interface is [eth0]
Jul 6 23:46:07.319581 systemd-networkd[1614]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:46:07.319586 systemd-networkd[1614]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:46:07.319624 systemd-networkd[1614]: eth0: DHCP lease lost
Jul 6 23:46:07.320384 waagent[2005]: 2025-07-06T23:46:07.320342Z INFO Daemon Daemon Create user account if not exists
Jul 6 23:46:07.327304 waagent[2005]: 2025-07-06T23:46:07.327265Z INFO Daemon Daemon User core already exists, skip useradd
Jul 6 23:46:07.332950 waagent[2005]: 2025-07-06T23:46:07.332921Z INFO Daemon Daemon Configure sudoer
Jul 6 23:46:07.343593 waagent[2005]: 2025-07-06T23:46:07.343551Z INFO Daemon Daemon Configure sshd
Jul 6 23:46:07.349206 waagent[2005]: 2025-07-06T23:46:07.349167Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jul 6 23:46:07.357769 waagent[2005]: 2025-07-06T23:46:07.357741Z INFO Daemon Daemon Deploy ssh public key.
Jul 6 23:46:07.362781 systemd-networkd[1614]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 6 23:46:08.478956 waagent[2005]: 2025-07-06T23:46:08.478914Z INFO Daemon Daemon Provisioning complete
Jul 6 23:46:08.494271 waagent[2005]: 2025-07-06T23:46:08.494239Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 6 23:46:08.499122 waagent[2005]: 2025-07-06T23:46:08.499095Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 6 23:46:08.506450 waagent[2005]: 2025-07-06T23:46:08.506425Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jul 6 23:46:08.601005 waagent[2098]: 2025-07-06T23:46:08.600937Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jul 6 23:46:08.601267 waagent[2098]: 2025-07-06T23:46:08.601057Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1
Jul 6 23:46:08.601267 waagent[2098]: 2025-07-06T23:46:08.601094Z INFO ExtHandler ExtHandler Python: 3.11.12
Jul 6 23:46:08.601267 waagent[2098]: 2025-07-06T23:46:08.601126Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Jul 6 23:46:08.634490 waagent[2098]: 2025-07-06T23:46:08.634432Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jul 6 23:46:08.634620 waagent[2098]: 2025-07-06T23:46:08.634584Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 6 23:46:08.634637 waagent[2098]: 2025-07-06T23:46:08.634624Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 6 23:46:08.640831 waagent[2098]: 2025-07-06T23:46:08.640788Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 6 23:46:08.646160 waagent[2098]: 2025-07-06T23:46:08.646129Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Jul 6 23:46:08.646500 waagent[2098]: 2025-07-06T23:46:08.646470Z INFO ExtHandler
Jul 6 23:46:08.646548 waagent[2098]: 2025-07-06T23:46:08.646531Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a9a50988-5d4d-48c9-aa3d-4ddc0f41b501 eTag: 15598939463629434112 source: Fabric]
Jul 6 23:46:08.646779 waagent[2098]: 2025-07-06T23:46:08.646756Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul 6 23:46:08.647174 waagent[2098]: 2025-07-06T23:46:08.647146Z INFO ExtHandler
Jul 6 23:46:08.647208 waagent[2098]: 2025-07-06T23:46:08.647193Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul 6 23:46:08.650931 waagent[2098]: 2025-07-06T23:46:08.650909Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul 6 23:46:08.708108 waagent[2098]: 2025-07-06T23:46:08.708054Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EADD9C0CF53EB35FFBACAB9B5234A45E51F04152', 'hasPrivateKey': False}
Jul 6 23:46:08.708384 waagent[2098]: 2025-07-06T23:46:08.708354Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C084FE94F1396A4BE0ED65313A146D3E898F1CB7', 'hasPrivateKey': True}
Jul 6 23:46:08.708666 waagent[2098]: 2025-07-06T23:46:08.708639Z INFO ExtHandler Fetch goal state completed
Jul 6 23:46:08.722406 waagent[2098]: 2025-07-06T23:46:08.722363Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
Jul 6 23:46:08.725591 waagent[2098]: 2025-07-06T23:46:08.725546Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2098
Jul 6 23:46:08.725682 waagent[2098]: 2025-07-06T23:46:08.725659Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jul 6 23:46:08.725925 waagent[2098]: 2025-07-06T23:46:08.725899Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jul 6 23:46:08.726959 waagent[2098]: 2025-07-06T23:46:08.726930Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk']
Jul 6 23:46:08.727266 waagent[2098]: 2025-07-06T23:46:08.727238Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jul 6 23:46:08.727369 waagent[2098]: 2025-07-06T23:46:08.727349Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jul 6 23:46:08.727801 waagent[2098]: 2025-07-06T23:46:08.727773Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 6 23:46:08.759720 waagent[2098]: 2025-07-06T23:46:08.759691Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 6 23:46:08.759896 waagent[2098]: 2025-07-06T23:46:08.759868Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 6 23:46:08.764018 waagent[2098]: 2025-07-06T23:46:08.763993Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 6 23:46:08.768654 systemd[1]: Reload requested from client PID 2115 ('systemctl') (unit waagent.service)...
Jul 6 23:46:08.768667 systemd[1]: Reloading...
Jul 6 23:46:08.840850 zram_generator::config[2153]: No configuration found.
Jul 6 23:46:08.897173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:46:08.978701 systemd[1]: Reloading finished in 209 ms.
Jul 6 23:46:08.995512 waagent[2098]: 2025-07-06T23:46:08.994883Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 6 23:46:08.995512 waagent[2098]: 2025-07-06T23:46:08.995013Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 6 23:46:10.353529 waagent[2098]: 2025-07-06T23:46:10.352780Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 6 23:46:10.353529 waagent[2098]: 2025-07-06T23:46:10.353092Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jul 6 23:46:10.353852 waagent[2098]: 2025-07-06T23:46:10.353722Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 6 23:46:10.353852 waagent[2098]: 2025-07-06T23:46:10.353805Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 6 23:46:10.353987 waagent[2098]: 2025-07-06T23:46:10.353956Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 6 23:46:10.354071 waagent[2098]: 2025-07-06T23:46:10.354030Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 6 23:46:10.354167 waagent[2098]: 2025-07-06T23:46:10.354141Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 6 23:46:10.354167 waagent[2098]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 6 23:46:10.354167 waagent[2098]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jul 6 23:46:10.354167 waagent[2098]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 6 23:46:10.354167 waagent[2098]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 6 23:46:10.354167 waagent[2098]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 6 23:46:10.354167 waagent[2098]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 6 23:46:10.354582 waagent[2098]: 2025-07-06T23:46:10.354554Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 6 23:46:10.354714 waagent[2098]: 2025-07-06T23:46:10.354691Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 6 23:46:10.355004 waagent[2098]: 2025-07-06T23:46:10.354967Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 6 23:46:10.355109 waagent[2098]: 2025-07-06T23:46:10.355077Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 6 23:46:10.355222 waagent[2098]: 2025-07-06T23:46:10.355194Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 6 23:46:10.355488 waagent[2098]: 2025-07-06T23:46:10.355458Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 6 23:46:10.355593 waagent[2098]: 2025-07-06T23:46:10.355564Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 6 23:46:10.355664 waagent[2098]: 2025-07-06T23:46:10.355636Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 6 23:46:10.355941 waagent[2098]: 2025-07-06T23:46:10.355907Z INFO EnvHandler ExtHandler Configure routes
Jul 6 23:46:10.356616 waagent[2098]: 2025-07-06T23:46:10.356590Z INFO EnvHandler ExtHandler Gateway:None
Jul 6 23:46:10.357027 waagent[2098]: 2025-07-06T23:46:10.357000Z INFO EnvHandler ExtHandler Routes:None
Jul 6 23:46:10.362149 waagent[2098]: 2025-07-06T23:46:10.362117Z INFO ExtHandler ExtHandler
Jul 6 23:46:10.362264 waagent[2098]: 2025-07-06T23:46:10.362244Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fa7de1ed-da43-4ba9-8e38-407f97212adb correlation 4827085e-e0c5-422b-b73d-5a9cebba9963 created: 2025-07-06T23:45:01.155724Z]
Jul 6 23:46:10.362586 waagent[2098]: 2025-07-06T23:46:10.362556Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 6 23:46:10.363091 waagent[2098]: 2025-07-06T23:46:10.363061Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jul 6 23:46:10.386384 waagent[2098]: 2025-07-06T23:46:10.386340Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jul 6 23:46:10.386384 waagent[2098]: Try `iptables -h' or 'iptables --help' for more information.)
Jul 6 23:46:10.386661 waagent[2098]: 2025-07-06T23:46:10.386628Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2D5EB1B5-AF28-4C5D-A69E-7E7F915CB7DB;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 6 23:46:10.406590 waagent[2098]: 2025-07-06T23:46:10.406544Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 6 23:46:10.406590 waagent[2098]: Executing ['ip', '-a', '-o', 'link']:
Jul 6 23:46:10.406590 waagent[2098]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 6 23:46:10.406590 waagent[2098]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:3b:5e brd ff:ff:ff:ff:ff:ff
Jul 6 23:46:10.406590 waagent[2098]: 3: enP53095s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:3b:5e brd ff:ff:ff:ff:ff:ff\ altname enP53095p0s2
Jul 6 23:46:10.406590 waagent[2098]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 6 23:46:10.406590 waagent[2098]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 6 23:46:10.406590 waagent[2098]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 6 23:46:10.406590 waagent[2098]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 6 23:46:10.406590 waagent[2098]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 6 23:46:10.406590 waagent[2098]: 2: eth0 inet6 fe80::20d:3aff:fe06:3b5e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 6 23:46:10.406590 waagent[2098]: 3: enP53095s1 inet6 fe80::20d:3aff:fe06:3b5e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 6 23:46:10.482779 waagent[2098]: 2025-07-06T23:46:10.482119Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 6 23:46:10.482779 waagent[2098]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:46:10.482779 waagent[2098]: pkts bytes target prot opt in out source destination
Jul 6 23:46:10.482779 waagent[2098]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:46:10.482779 waagent[2098]: pkts bytes target prot opt in out source destination
Jul 6 23:46:10.482779 waagent[2098]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:46:10.482779 waagent[2098]: pkts bytes target prot opt in out source destination
Jul 6 23:46:10.482779 waagent[2098]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 6 23:46:10.482779 waagent[2098]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 6 23:46:10.482779 waagent[2098]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 6 23:46:10.484473 waagent[2098]: 2025-07-06T23:46:10.484442Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 6 23:46:10.484473 waagent[2098]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:46:10.484473 waagent[2098]: pkts bytes target prot opt in out source destination
Jul 6 23:46:10.484473 waagent[2098]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:46:10.484473 waagent[2098]: pkts bytes target prot opt in out source destination
Jul 6 23:46:10.484473 waagent[2098]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:46:10.484473 waagent[2098]: pkts bytes target prot opt in out source destination
Jul 6 23:46:10.484473 waagent[2098]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 6 23:46:10.484473 waagent[2098]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 6 23:46:10.484473 waagent[2098]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 6 23:46:10.484874 waagent[2098]: 2025-07-06T23:46:10.484852Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jul 6 23:46:15.488623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:46:15.490024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:15.581132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:15.589095 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:46:15.706257 kubelet[2248]: E0706 23:46:15.706203 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:46:15.708905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:46:15.709014 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:46:15.709312 systemd[1]: kubelet.service: Consumed 110ms CPU time, 106.2M memory peak.
Jul 6 23:46:17.960894 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:46:17.961830 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:42772.service - OpenSSH per-connection server daemon (10.200.16.10:42772).
Jul 6 23:46:18.585527 sshd[2256]: Accepted publickey for core from 10.200.16.10 port 42772 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:46:18.586625 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:18.590731 systemd-logind[1858]: New session 3 of user core.
Jul 6 23:46:18.599848 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:46:19.022931 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:42774.service - OpenSSH per-connection server daemon (10.200.16.10:42774).
Jul 6 23:46:19.501536 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 42774 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:46:19.502622 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:19.506024 systemd-logind[1858]: New session 4 of user core.
Jul 6 23:46:19.510918 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:46:19.841824 sshd[2263]: Connection closed by 10.200.16.10 port 42774
Jul 6 23:46:19.842422 sshd-session[2261]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:19.845227 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:42774.service: Deactivated successfully.
Jul 6 23:46:19.846612 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:46:19.847194 systemd-logind[1858]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:46:19.848442 systemd-logind[1858]: Removed session 4.
Jul 6 23:46:19.924955 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:44956.service - OpenSSH per-connection server daemon (10.200.16.10:44956).
Jul 6 23:46:20.402631 sshd[2269]: Accepted publickey for core from 10.200.16.10 port 44956 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:46:20.403730 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:20.407259 systemd-logind[1858]: New session 5 of user core.
Jul 6 23:46:20.417940 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:46:20.740491 sshd[2271]: Connection closed by 10.200.16.10 port 44956
Jul 6 23:46:20.740405 sshd-session[2269]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:20.742720 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:44956.service: Deactivated successfully.
Jul 6 23:46:20.744056 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:46:20.745111 systemd-logind[1858]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:46:20.746632 systemd-logind[1858]: Removed session 5.
Jul 6 23:46:20.829920 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:44968.service - OpenSSH per-connection server daemon (10.200.16.10:44968).
Jul 6 23:46:21.306782 sshd[2277]: Accepted publickey for core from 10.200.16.10 port 44968 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:46:21.307801 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:21.311284 systemd-logind[1858]: New session 6 of user core.
Jul 6 23:46:21.322852 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:46:21.662352 sshd[2279]: Connection closed by 10.200.16.10 port 44968
Jul 6 23:46:21.662790 sshd-session[2277]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:21.665446 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:44968.service: Deactivated successfully.
Jul 6 23:46:21.666671 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:46:21.667580 systemd-logind[1858]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:46:21.668545 systemd-logind[1858]: Removed session 6.
Jul 6 23:46:21.757924 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:44974.service - OpenSSH per-connection server daemon (10.200.16.10:44974).
Jul 6 23:46:22.233600 sshd[2285]: Accepted publickey for core from 10.200.16.10 port 44974 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:46:22.234649 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:22.238449 systemd-logind[1858]: New session 7 of user core.
Jul 6 23:46:22.244860 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:46:22.627283 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:46:22.627491 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:46:22.656412 sudo[2288]: pam_unix(sudo:session): session closed for user root
Jul 6 23:46:22.737533 sshd[2287]: Connection closed by 10.200.16.10 port 44974
Jul 6 23:46:22.738434 sshd-session[2285]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:22.741503 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:44974.service: Deactivated successfully.
Jul 6 23:46:22.743068 systemd[1]: session-7.scope: Deactivated successfully.
Jul 6 23:46:22.743657 systemd-logind[1858]: Session 7 logged out. Waiting for processes to exit.
Jul 6 23:46:22.745097 systemd-logind[1858]: Removed session 7.
Jul 6 23:46:22.826214 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:44980.service - OpenSSH per-connection server daemon (10.200.16.10:44980).
Jul 6 23:46:23.312342 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 44980 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:46:23.313465 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:23.317297 systemd-logind[1858]: New session 8 of user core.
Jul 6 23:46:23.325847 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:46:23.578916 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:46:23.579118 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:46:23.588085 sudo[2298]: pam_unix(sudo:session): session closed for user root
Jul 6 23:46:23.591273 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:46:23.591454 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:46:23.598459 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:46:23.624274 augenrules[2320]: No rules
Jul 6 23:46:23.625306 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:46:23.625473 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:46:23.626873 sudo[2297]: pam_unix(sudo:session): session closed for user root
Jul 6 23:46:23.716004 sshd[2296]: Connection closed by 10.200.16.10 port 44980
Jul 6 23:46:23.716417 sshd-session[2294]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:23.718665 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:44980.service: Deactivated successfully.
Jul 6 23:46:23.719846 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:46:23.720386 systemd-logind[1858]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:46:23.722302 systemd-logind[1858]: Removed session 8.
Jul 6 23:46:23.804492 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:44984.service - OpenSSH per-connection server daemon (10.200.16.10:44984).
Jul 6 23:46:24.282361 sshd[2329]: Accepted publickey for core from 10.200.16.10 port 44984 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:46:24.283419 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:24.287043 systemd-logind[1858]: New session 9 of user core.
Jul 6 23:46:24.295022 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:46:24.548637 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:46:24.549068 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:25.738944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:46:25.740482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:25.842434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:25.844661 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:46:25.936067 kubelet[2356]: E0706 23:46:25.936028 2356 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:46:25.938011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:46:25.938196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:46:25.938658 systemd[1]: kubelet.service: Consumed 98ms CPU time, 104.8M memory peak. Jul 6 23:46:26.353111 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 6 23:46:26.361956 (dockerd)[2364]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:46:27.055274 dockerd[2364]: time="2025-07-06T23:46:27.053808416Z" level=info msg="Starting up" Jul 6 23:46:27.056329 dockerd[2364]: time="2025-07-06T23:46:27.056243872Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 6 23:46:27.093522 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4077135389-merged.mount: Deactivated successfully. Jul 6 23:46:27.156241 dockerd[2364]: time="2025-07-06T23:46:27.156175000Z" level=info msg="Loading containers: start." Jul 6 23:46:27.198757 kernel: Initializing XFRM netlink socket Jul 6 23:46:27.475822 systemd-networkd[1614]: docker0: Link UP Jul 6 23:46:27.490287 dockerd[2364]: time="2025-07-06T23:46:27.490249936Z" level=info msg="Loading containers: done." Jul 6 23:46:27.518043 dockerd[2364]: time="2025-07-06T23:46:27.517982144Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:46:27.518189 dockerd[2364]: time="2025-07-06T23:46:27.518089400Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 6 23:46:27.518208 dockerd[2364]: time="2025-07-06T23:46:27.518189432Z" level=info msg="Initializing buildkit" Jul 6 23:46:27.565063 dockerd[2364]: time="2025-07-06T23:46:27.565009392Z" level=info msg="Completed buildkit initialization" Jul 6 23:46:27.570188 dockerd[2364]: time="2025-07-06T23:46:27.570150768Z" level=info msg="Daemon has completed initialization" Jul 6 23:46:27.570526 dockerd[2364]: time="2025-07-06T23:46:27.570198344Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:46:27.570424 systemd[1]: Started docker.service - Docker 
Application Container Engine. Jul 6 23:46:28.086406 chronyd[1849]: Selected source PHC0 Jul 6 23:46:28.091184 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2607284273-merged.mount: Deactivated successfully. Jul 6 23:46:28.494267 containerd[1874]: time="2025-07-06T23:46:28.494216200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:46:29.404319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195841350.mount: Deactivated successfully. Jul 6 23:46:30.615831 containerd[1874]: time="2025-07-06T23:46:30.615189649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:30.620136 containerd[1874]: time="2025-07-06T23:46:30.620110753Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 6 23:46:30.623711 containerd[1874]: time="2025-07-06T23:46:30.623682833Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:30.628507 containerd[1874]: time="2025-07-06T23:46:30.628482145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:30.629025 containerd[1874]: time="2025-07-06T23:46:30.628964945Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.134714297s" Jul 6 23:46:30.629076 containerd[1874]: time="2025-07-06T23:46:30.629027889Z" level=info msg="PullImage 
\"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 6 23:46:30.629607 containerd[1874]: time="2025-07-06T23:46:30.629578833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:46:31.948529 containerd[1874]: time="2025-07-06T23:46:31.948478337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.959119 containerd[1874]: time="2025-07-06T23:46:31.959078857Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 6 23:46:31.966855 containerd[1874]: time="2025-07-06T23:46:31.966813521Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.980393 containerd[1874]: time="2025-07-06T23:46:31.980334753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.981124 containerd[1874]: time="2025-07-06T23:46:31.980785129Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.35117912s" Jul 6 23:46:31.981124 containerd[1874]: time="2025-07-06T23:46:31.980812137Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference 
\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 6 23:46:31.981346 containerd[1874]: time="2025-07-06T23:46:31.981318577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:46:33.058380 containerd[1874]: time="2025-07-06T23:46:33.058303825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:33.060780 containerd[1874]: time="2025-07-06T23:46:33.060752329Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 6 23:46:33.064094 containerd[1874]: time="2025-07-06T23:46:33.064053361Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:33.068213 containerd[1874]: time="2025-07-06T23:46:33.068173857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:33.069255 containerd[1874]: time="2025-07-06T23:46:33.069147705Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.087801776s" Jul 6 23:46:33.069255 containerd[1874]: time="2025-07-06T23:46:33.069175105Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 6 23:46:33.069688 containerd[1874]: time="2025-07-06T23:46:33.069638777Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:46:34.238135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593340024.mount: Deactivated successfully. Jul 6 23:46:34.532520 containerd[1874]: time="2025-07-06T23:46:34.532382825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:34.535313 containerd[1874]: time="2025-07-06T23:46:34.535283921Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 6 23:46:34.537713 containerd[1874]: time="2025-07-06T23:46:34.537669689Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:34.540629 containerd[1874]: time="2025-07-06T23:46:34.540591273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:34.541026 containerd[1874]: time="2025-07-06T23:46:34.540830129Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.471062952s" Jul 6 23:46:34.541026 containerd[1874]: time="2025-07-06T23:46:34.540855481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 6 23:46:34.541299 containerd[1874]: time="2025-07-06T23:46:34.541275945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:46:35.379279 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1241933819.mount: Deactivated successfully. Jul 6 23:46:35.988567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:46:35.989868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:37.008522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:37.010931 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:46:37.037691 kubelet[2663]: E0706 23:46:37.037635 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:46:37.039396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:46:37.039506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:46:37.039990 systemd[1]: kubelet.service: Consumed 101ms CPU time, 104.1M memory peak. 
Jul 6 23:46:37.542260 containerd[1874]: time="2025-07-06T23:46:37.542216558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:37.546652 containerd[1874]: time="2025-07-06T23:46:37.546626010Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 6 23:46:37.550422 containerd[1874]: time="2025-07-06T23:46:37.550385514Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:37.554804 containerd[1874]: time="2025-07-06T23:46:37.554763357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:37.555263 containerd[1874]: time="2025-07-06T23:46:37.555149768Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 3.013844343s" Jul 6 23:46:37.555263 containerd[1874]: time="2025-07-06T23:46:37.555179633Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 6 23:46:37.555594 containerd[1874]: time="2025-07-06T23:46:37.555572101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:46:38.115760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949122429.mount: Deactivated successfully. 
Jul 6 23:46:38.146778 containerd[1874]: time="2025-07-06T23:46:38.146609035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:38.148810 containerd[1874]: time="2025-07-06T23:46:38.148667216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:46:38.154869 containerd[1874]: time="2025-07-06T23:46:38.154845905Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:38.159603 containerd[1874]: time="2025-07-06T23:46:38.159550341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:38.160213 containerd[1874]: time="2025-07-06T23:46:38.159966562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 604.367629ms" Jul 6 23:46:38.160213 containerd[1874]: time="2025-07-06T23:46:38.159994915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:46:38.160589 containerd[1874]: time="2025-07-06T23:46:38.160567804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:46:38.884192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3550841368.mount: Deactivated 
successfully. Jul 6 23:46:40.959162 containerd[1874]: time="2025-07-06T23:46:40.959110979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:40.961773 containerd[1874]: time="2025-07-06T23:46:40.961731714Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 6 23:46:40.964444 containerd[1874]: time="2025-07-06T23:46:40.964406457Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:40.974381 containerd[1874]: time="2025-07-06T23:46:40.974342690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:40.975054 containerd[1874]: time="2025-07-06T23:46:40.974946412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.814349344s" Jul 6 23:46:40.975054 containerd[1874]: time="2025-07-06T23:46:40.974973149Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 6 23:46:43.506610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:43.506830 systemd[1]: kubelet.service: Consumed 101ms CPU time, 104.1M memory peak. Jul 6 23:46:43.515140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:43.529982 systemd[1]: Reload requested from client PID 2785 ('systemctl') (unit session-9.scope)... 
Jul 6 23:46:43.530101 systemd[1]: Reloading... Jul 6 23:46:43.625770 zram_generator::config[2832]: No configuration found. Jul 6 23:46:43.684654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:43.768482 systemd[1]: Reloading finished in 238 ms. Jul 6 23:46:43.814159 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:46:43.814218 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:46:43.814525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:43.814571 systemd[1]: kubelet.service: Consumed 73ms CPU time, 95M memory peak. Jul 6 23:46:43.816221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:44.082636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:44.091111 (kubelet)[2899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:46:44.115567 kubelet[2899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:44.115567 kubelet[2899]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:46:44.115567 kubelet[2899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:46:44.115868 kubelet[2899]: I0706 23:46:44.115570 2899 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:46:44.460794 kubelet[2899]: I0706 23:46:44.460676 2899 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:46:44.460794 kubelet[2899]: I0706 23:46:44.460710 2899 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:46:44.461299 kubelet[2899]: I0706 23:46:44.461274 2899 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:46:44.474276 kubelet[2899]: E0706 23:46:44.474240 2899 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.476240 kubelet[2899]: I0706 23:46:44.476144 2899 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:46:44.481334 kubelet[2899]: I0706 23:46:44.481316 2899 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:46:44.483631 kubelet[2899]: I0706 23:46:44.483611 2899 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:46:44.484109 kubelet[2899]: I0706 23:46:44.484076 2899 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:46:44.484236 kubelet[2899]: I0706 23:46:44.484112 2899 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-ba147b1783","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:46:44.484315 kubelet[2899]: I0706 23:46:44.484245 2899 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 6 23:46:44.484315 kubelet[2899]: I0706 23:46:44.484252 2899 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:46:44.484375 kubelet[2899]: I0706 23:46:44.484363 2899 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:44.485914 kubelet[2899]: I0706 23:46:44.485899 2899 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:46:44.485949 kubelet[2899]: I0706 23:46:44.485917 2899 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:46:44.485949 kubelet[2899]: I0706 23:46:44.485937 2899 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:46:44.485949 kubelet[2899]: I0706 23:46:44.485945 2899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:46:44.490850 kubelet[2899]: W0706 23:46:44.490813 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 6 23:46:44.490916 kubelet[2899]: E0706 23:46:44.490858 2899 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.490936 kubelet[2899]: W0706 23:46:44.490909 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-ba147b1783&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 6 23:46:44.490936 kubelet[2899]: E0706 23:46:44.490928 2899 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-ba147b1783&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.490996 kubelet[2899]: I0706 23:46:44.490983 2899 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:46:44.491312 kubelet[2899]: I0706 23:46:44.491299 2899 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:46:44.491356 kubelet[2899]: W0706 23:46:44.491346 2899 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:46:44.491805 kubelet[2899]: I0706 23:46:44.491788 2899 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:46:44.491852 kubelet[2899]: I0706 23:46:44.491816 2899 server.go:1287] "Started kubelet" Jul 6 23:46:44.492318 kubelet[2899]: I0706 23:46:44.492276 2899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:46:44.492569 kubelet[2899]: I0706 23:46:44.492555 2899 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:46:44.493507 kubelet[2899]: I0706 23:46:44.493488 2899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:46:44.494012 kubelet[2899]: E0706 23:46:44.493631 2899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-a-ba147b1783.184fce4bad102e31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-a-ba147b1783,UID:ci-4344.1.1-a-ba147b1783,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-a-ba147b1783,},FirstTimestamp:2025-07-06 23:46:44.491800113 +0000 UTC m=+0.398120450,LastTimestamp:2025-07-06 23:46:44.491800113 +0000 UTC m=+0.398120450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-a-ba147b1783,}" Jul 6 23:46:44.494889 kubelet[2899]: I0706 23:46:44.494862 2899 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:46:44.495782 kubelet[2899]: I0706 23:46:44.495664 2899 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:46:44.496276 kubelet[2899]: I0706 23:46:44.496246 2899 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:46:44.498544 kubelet[2899]: I0706 23:46:44.498368 2899 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:46:44.498628 kubelet[2899]: E0706 23:46:44.498534 2899 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-ba147b1783\" not found" Jul 6 23:46:44.498712 kubelet[2899]: I0706 23:46:44.498703 2899 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:46:44.498854 kubelet[2899]: I0706 23:46:44.498795 2899 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:46:44.499185 kubelet[2899]: W0706 23:46:44.499159 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jul 6 23:46:44.499280 kubelet[2899]: E0706 23:46:44.499264 2899 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:46:44.499400 kubelet[2899]: E0706 23:46:44.499386 2899 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:46:44.499571 kubelet[2899]: I0706 23:46:44.499559 2899 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:46:44.499684 kubelet[2899]: I0706 23:46:44.499670 2899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:46:44.501436 kubelet[2899]: E0706 23:46:44.501125 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-ba147b1783?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms"
Jul 6 23:46:44.503909 kubelet[2899]: I0706 23:46:44.503546 2899 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:46:44.527469 kubelet[2899]: I0706 23:46:44.527450 2899 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:46:44.527597 kubelet[2899]: I0706 23:46:44.527587 2899 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:46:44.527655 kubelet[2899]: I0706 23:46:44.527643 2899 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:46:44.590696 kubelet[2899]: I0706 23:46:44.590666 2899 policy_none.go:49] "None policy: Start"
Jul 6 23:46:44.590906 kubelet[2899]: I0706 23:46:44.590877 2899 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:46:44.590952 kubelet[2899]: I0706 23:46:44.590919 2899 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:46:44.598192 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:46:44.599748 kubelet[2899]: E0706 23:46:44.599724 2899 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-ba147b1783\" not found"
Jul 6 23:46:44.605383 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:46:44.608179 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:46:44.615376 kubelet[2899]: I0706 23:46:44.615357 2899 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:46:44.615628 kubelet[2899]: I0706 23:46:44.615614 2899 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:46:44.615726 kubelet[2899]: I0706 23:46:44.615696 2899 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:46:44.616013 kubelet[2899]: I0706 23:46:44.615998 2899 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:46:44.617208 kubelet[2899]: E0706 23:46:44.617128 2899 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:46:44.617208 kubelet[2899]: E0706 23:46:44.617194 2899 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-a-ba147b1783\" not found"
Jul 6 23:46:44.637712 kubelet[2899]: I0706 23:46:44.637662 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:46:44.639165 kubelet[2899]: I0706 23:46:44.639138 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:46:44.639165 kubelet[2899]: I0706 23:46:44.639162 2899 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 6 23:46:44.639259 kubelet[2899]: I0706 23:46:44.639180 2899 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:46:44.639259 kubelet[2899]: I0706 23:46:44.639185 2899 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 6 23:46:44.639259 kubelet[2899]: E0706 23:46:44.639218 2899 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jul 6 23:46:44.640603 kubelet[2899]: W0706 23:46:44.640148 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 6 23:46:44.640603 kubelet[2899]: E0706 23:46:44.640192 2899 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:46:44.702490 kubelet[2899]: E0706 23:46:44.702448 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-ba147b1783?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms"
Jul 6 23:46:44.718257 kubelet[2899]: I0706 23:46:44.718132 2899 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.719310 kubelet[2899]: E0706 23:46:44.719260 2899 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.748020 systemd[1]: Created slice kubepods-burstable-podd17cb6f16f61e7b6d5ad3590a29897ad.slice - libcontainer container kubepods-burstable-podd17cb6f16f61e7b6d5ad3590a29897ad.slice.
Jul 6 23:46:44.759477 kubelet[2899]: E0706 23:46:44.759340 2899 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.761576 systemd[1]: Created slice kubepods-burstable-pod7b0000eb0ba63b05fd354c6af094af5d.slice - libcontainer container kubepods-burstable-pod7b0000eb0ba63b05fd354c6af094af5d.slice.
Jul 6 23:46:44.763366 kubelet[2899]: E0706 23:46:44.763340 2899 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.770663 systemd[1]: Created slice kubepods-burstable-poda081a88f61152832c52e85e48dcf9231.slice - libcontainer container kubepods-burstable-poda081a88f61152832c52e85e48dcf9231.slice.
Jul 6 23:46:44.772040 kubelet[2899]: E0706 23:46:44.772021 2899 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800431 kubelet[2899]: I0706 23:46:44.800407 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800494 kubelet[2899]: I0706 23:46:44.800436 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800494 kubelet[2899]: I0706 23:46:44.800452 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a081a88f61152832c52e85e48dcf9231-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-ba147b1783\" (UID: \"a081a88f61152832c52e85e48dcf9231\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800494 kubelet[2899]: I0706 23:46:44.800465 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17cb6f16f61e7b6d5ad3590a29897ad-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" (UID: \"d17cb6f16f61e7b6d5ad3590a29897ad\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800494 kubelet[2899]: I0706 23:46:44.800474 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d17cb6f16f61e7b6d5ad3590a29897ad-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" (UID: \"d17cb6f16f61e7b6d5ad3590a29897ad\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800494 kubelet[2899]: I0706 23:46:44.800483 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800577 kubelet[2899]: I0706 23:46:44.800495 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17cb6f16f61e7b6d5ad3590a29897ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" (UID: \"d17cb6f16f61e7b6d5ad3590a29897ad\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800577 kubelet[2899]: I0706 23:46:44.800504 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.800577 kubelet[2899]: I0706 23:46:44.800518 2899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.921756 kubelet[2899]: I0706 23:46:44.921714 2899 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:44.922106 kubelet[2899]: E0706 23:46:44.922075 2899 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:45.060661 containerd[1874]: time="2025-07-06T23:46:45.060602302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-ba147b1783,Uid:d17cb6f16f61e7b6d5ad3590a29897ad,Namespace:kube-system,Attempt:0,}"
Jul 6 23:46:45.064295 containerd[1874]: time="2025-07-06T23:46:45.064198382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-ba147b1783,Uid:7b0000eb0ba63b05fd354c6af094af5d,Namespace:kube-system,Attempt:0,}"
Jul 6 23:46:45.072770 containerd[1874]: time="2025-07-06T23:46:45.072694358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-ba147b1783,Uid:a081a88f61152832c52e85e48dcf9231,Namespace:kube-system,Attempt:0,}"
Jul 6 23:46:45.103659 kubelet[2899]: E0706 23:46:45.103624 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-ba147b1783?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms"
Jul 6 23:46:45.229077 containerd[1874]: time="2025-07-06T23:46:45.229015087Z" level=info msg="connecting to shim 2e377b3ca2c459df2d7558221122c418671eda1dbce7744a3d85d550568a9f63" address="unix:///run/containerd/s/b9de10ccb69b8a754b95edcb9556472c8c6657261fea18c250278379615f9a94" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:46:45.249360 containerd[1874]: time="2025-07-06T23:46:45.249266258Z" level=info msg="connecting to shim 3daafefad97e6ae0f99f2bf2cdc6c25a8301ac1609781bcc38ac4695a141eee4" address="unix:///run/containerd/s/2519c0b5662289b30d19dd6adfcb5728b0f6604961e4a093acdf849b17b86f1d" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:46:45.252894 systemd[1]: Started cri-containerd-2e377b3ca2c459df2d7558221122c418671eda1dbce7744a3d85d550568a9f63.scope - libcontainer container 2e377b3ca2c459df2d7558221122c418671eda1dbce7744a3d85d550568a9f63.
Jul 6 23:46:45.258804 containerd[1874]: time="2025-07-06T23:46:45.258728221Z" level=info msg="connecting to shim c3376b8ba7c76c7d88cd4baef2328d4d325b95b874b0784a0a08950222eff9da" address="unix:///run/containerd/s/8d506863da4101914f9808cd10acbf67e67ec74ca0349affc7cc46025b19d14b" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:46:45.271930 systemd[1]: Started cri-containerd-3daafefad97e6ae0f99f2bf2cdc6c25a8301ac1609781bcc38ac4695a141eee4.scope - libcontainer container 3daafefad97e6ae0f99f2bf2cdc6c25a8301ac1609781bcc38ac4695a141eee4.
Jul 6 23:46:45.283858 systemd[1]: Started cri-containerd-c3376b8ba7c76c7d88cd4baef2328d4d325b95b874b0784a0a08950222eff9da.scope - libcontainer container c3376b8ba7c76c7d88cd4baef2328d4d325b95b874b0784a0a08950222eff9da.
Jul 6 23:46:45.309563 containerd[1874]: time="2025-07-06T23:46:45.309502774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-ba147b1783,Uid:d17cb6f16f61e7b6d5ad3590a29897ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e377b3ca2c459df2d7558221122c418671eda1dbce7744a3d85d550568a9f63\""
Jul 6 23:46:45.315946 containerd[1874]: time="2025-07-06T23:46:45.315619823Z" level=info msg="CreateContainer within sandbox \"2e377b3ca2c459df2d7558221122c418671eda1dbce7744a3d85d550568a9f63\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:46:45.324592 containerd[1874]: time="2025-07-06T23:46:45.324566315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-ba147b1783,Uid:a081a88f61152832c52e85e48dcf9231,Namespace:kube-system,Attempt:0,} returns sandbox id \"3daafefad97e6ae0f99f2bf2cdc6c25a8301ac1609781bcc38ac4695a141eee4\""
Jul 6 23:46:45.324857 kubelet[2899]: I0706 23:46:45.324814 2899 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:45.327337 kubelet[2899]: E0706 23:46:45.325133 2899 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:45.328344 containerd[1874]: time="2025-07-06T23:46:45.328190324Z" level=info msg="CreateContainer within sandbox \"3daafefad97e6ae0f99f2bf2cdc6c25a8301ac1609781bcc38ac4695a141eee4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:46:45.329754 containerd[1874]: time="2025-07-06T23:46:45.329713856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-ba147b1783,Uid:7b0000eb0ba63b05fd354c6af094af5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3376b8ba7c76c7d88cd4baef2328d4d325b95b874b0784a0a08950222eff9da\""
Jul 6 23:46:45.331575 containerd[1874]: time="2025-07-06T23:46:45.331490084Z" level=info msg="CreateContainer within sandbox \"c3376b8ba7c76c7d88cd4baef2328d4d325b95b874b0784a0a08950222eff9da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:46:45.337019 kubelet[2899]: W0706 23:46:45.336952 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 6 23:46:45.337019 kubelet[2899]: E0706 23:46:45.336997 2899 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:46:45.352758 containerd[1874]: time="2025-07-06T23:46:45.352603872Z" level=info msg="Container a89d13471f4824fd0add0bbc863263efca7c21e4550f02b3b6c467345f584113: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:46:45.904891 kubelet[2899]: E0706 23:46:45.904849 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-ba147b1783?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s"
Jul 6 23:46:46.069337 kubelet[2899]: W0706 23:46:46.069245 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-ba147b1783&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 6 23:46:46.069337 kubelet[2899]: E0706 23:46:46.069309 2899 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-ba147b1783&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:46:46.082956 kubelet[2899]: W0706 23:46:46.082917 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 6 23:46:46.083036 kubelet[2899]: E0706 23:46:46.082966 2899 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:46:46.127144 kubelet[2899]: I0706 23:46:46.127095 2899 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:46.127450 kubelet[2899]: E0706 23:46:46.127426 2899 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:46.186987 kubelet[2899]: W0706 23:46:46.186854 2899 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Jul 6 23:46:46.186987 kubelet[2899]: E0706 23:46:46.186892 2899 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:46:46.561573 containerd[1874]: time="2025-07-06T23:46:46.561524748Z" level=info msg="CreateContainer within sandbox \"2e377b3ca2c459df2d7558221122c418671eda1dbce7744a3d85d550568a9f63\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a89d13471f4824fd0add0bbc863263efca7c21e4550f02b3b6c467345f584113\""
Jul 6 23:46:46.562112 containerd[1874]: time="2025-07-06T23:46:46.562091581Z" level=info msg="StartContainer for \"a89d13471f4824fd0add0bbc863263efca7c21e4550f02b3b6c467345f584113\""
Jul 6 23:46:46.562921 containerd[1874]: time="2025-07-06T23:46:46.562897596Z" level=info msg="connecting to shim a89d13471f4824fd0add0bbc863263efca7c21e4550f02b3b6c467345f584113" address="unix:///run/containerd/s/b9de10ccb69b8a754b95edcb9556472c8c6657261fea18c250278379615f9a94" protocol=ttrpc version=3
Jul 6 23:46:46.569522 containerd[1874]: time="2025-07-06T23:46:46.569460067Z" level=info msg="Container f3d7588b0b49b71485e54fef3a49ca9b262b5dc6da0fbcbee5db19eadc7ccc06: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:46:46.577189 containerd[1874]: time="2025-07-06T23:46:46.577162850Z" level=info msg="Container 8a1f1d0b38874176f1182f4b08129c1fdfe5da4fdddbc08f5f8b9cc112ab6afd: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:46:46.582863 systemd[1]: Started cri-containerd-a89d13471f4824fd0add0bbc863263efca7c21e4550f02b3b6c467345f584113.scope - libcontainer container a89d13471f4824fd0add0bbc863263efca7c21e4550f02b3b6c467345f584113.
Jul 6 23:46:46.590039 containerd[1874]: time="2025-07-06T23:46:46.590005831Z" level=info msg="CreateContainer within sandbox \"3daafefad97e6ae0f99f2bf2cdc6c25a8301ac1609781bcc38ac4695a141eee4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f3d7588b0b49b71485e54fef3a49ca9b262b5dc6da0fbcbee5db19eadc7ccc06\""
Jul 6 23:46:46.590228 kubelet[2899]: E0706 23:46:46.589722 2899 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:46:46.590436 containerd[1874]: time="2025-07-06T23:46:46.590322192Z" level=info msg="StartContainer for \"f3d7588b0b49b71485e54fef3a49ca9b262b5dc6da0fbcbee5db19eadc7ccc06\""
Jul 6 23:46:46.591865 containerd[1874]: time="2025-07-06T23:46:46.591832788Z" level=info msg="connecting to shim f3d7588b0b49b71485e54fef3a49ca9b262b5dc6da0fbcbee5db19eadc7ccc06" address="unix:///run/containerd/s/2519c0b5662289b30d19dd6adfcb5728b0f6604961e4a093acdf849b17b86f1d" protocol=ttrpc version=3
Jul 6 23:46:46.605187 containerd[1874]: time="2025-07-06T23:46:46.603463613Z" level=info msg="CreateContainer within sandbox \"c3376b8ba7c76c7d88cd4baef2328d4d325b95b874b0784a0a08950222eff9da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a1f1d0b38874176f1182f4b08129c1fdfe5da4fdddbc08f5f8b9cc112ab6afd\""
Jul 6 23:46:46.605187 containerd[1874]: time="2025-07-06T23:46:46.604360807Z" level=info msg="StartContainer for \"8a1f1d0b38874176f1182f4b08129c1fdfe5da4fdddbc08f5f8b9cc112ab6afd\""
Jul 6 23:46:46.605187 containerd[1874]: time="2025-07-06T23:46:46.605034995Z" level=info msg="connecting to shim 8a1f1d0b38874176f1182f4b08129c1fdfe5da4fdddbc08f5f8b9cc112ab6afd" address="unix:///run/containerd/s/8d506863da4101914f9808cd10acbf67e67ec74ca0349affc7cc46025b19d14b" protocol=ttrpc version=3
Jul 6 23:46:46.607922 systemd[1]: Started cri-containerd-f3d7588b0b49b71485e54fef3a49ca9b262b5dc6da0fbcbee5db19eadc7ccc06.scope - libcontainer container f3d7588b0b49b71485e54fef3a49ca9b262b5dc6da0fbcbee5db19eadc7ccc06.
Jul 6 23:46:46.627904 systemd[1]: Started cri-containerd-8a1f1d0b38874176f1182f4b08129c1fdfe5da4fdddbc08f5f8b9cc112ab6afd.scope - libcontainer container 8a1f1d0b38874176f1182f4b08129c1fdfe5da4fdddbc08f5f8b9cc112ab6afd.
Jul 6 23:46:46.636837 containerd[1874]: time="2025-07-06T23:46:46.636597535Z" level=info msg="StartContainer for \"a89d13471f4824fd0add0bbc863263efca7c21e4550f02b3b6c467345f584113\" returns successfully"
Jul 6 23:46:46.660879 kubelet[2899]: E0706 23:46:46.660814 2899 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:46.685293 containerd[1874]: time="2025-07-06T23:46:46.685251282Z" level=info msg="StartContainer for \"8a1f1d0b38874176f1182f4b08129c1fdfe5da4fdddbc08f5f8b9cc112ab6afd\" returns successfully"
Jul 6 23:46:46.693440 containerd[1874]: time="2025-07-06T23:46:46.693413879Z" level=info msg="StartContainer for \"f3d7588b0b49b71485e54fef3a49ca9b262b5dc6da0fbcbee5db19eadc7ccc06\" returns successfully"
Jul 6 23:46:47.667986 kubelet[2899]: E0706 23:46:47.667777 2899 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:47.670042 kubelet[2899]: E0706 23:46:47.669930 2899 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:47.670447 kubelet[2899]: E0706 23:46:47.670327 2899 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:47.729433 kubelet[2899]: I0706 23:46:47.729402 2899 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:47.766105 kubelet[2899]: E0706 23:46:47.766060 2899 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-a-ba147b1783\" not found" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:47.866359 kubelet[2899]: I0706 23:46:47.866288 2899 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:47.866359 kubelet[2899]: E0706 23:46:47.866327 2899 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-a-ba147b1783\": node \"ci-4344.1.1-a-ba147b1783\" not found"
Jul 6 23:46:47.925809 kubelet[2899]: E0706 23:46:47.925483 2899 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-ba147b1783\" not found"
Jul 6 23:46:48.001144 kubelet[2899]: I0706 23:46:48.001105 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.013215 kubelet[2899]: E0706 23:46:48.013056 2899 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.013215 kubelet[2899]: I0706 23:46:48.013084 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.015323 kubelet[2899]: E0706 23:46:48.015189 2899 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.015323 kubelet[2899]: I0706 23:46:48.015209 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.017234 kubelet[2899]: E0706 23:46:48.017217 2899 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-ba147b1783\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.490393 kubelet[2899]: I0706 23:46:48.490354 2899 apiserver.go:52] "Watching apiserver"
Jul 6 23:46:48.499626 kubelet[2899]: I0706 23:46:48.499589 2899 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 6 23:46:48.669765 kubelet[2899]: I0706 23:46:48.669482 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.669765 kubelet[2899]: I0706 23:46:48.669622 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.671958 kubelet[2899]: I0706 23:46:48.669830 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:48.679879 kubelet[2899]: W0706 23:46:48.679845 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 6 23:46:48.680020 kubelet[2899]: W0706 23:46:48.680000 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 6 23:46:48.680780 kubelet[2899]: W0706 23:46:48.680764 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 6 23:46:48.899880 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 6 23:46:49.671522 kubelet[2899]: I0706 23:46:49.670836 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:49.671522 kubelet[2899]: I0706 23:46:49.671140 2899 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:49.686269 kubelet[2899]: W0706 23:46:49.686233 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 6 23:46:49.686991 kubelet[2899]: E0706 23:46:49.686934 2899 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-ba147b1783\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:49.687086 kubelet[2899]: W0706 23:46:49.686777 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 6 23:46:49.687132 kubelet[2899]: E0706 23:46:49.687099 2899 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783"
Jul 6 23:46:49.905069 update_engine[1862]: I20250706 23:46:49.905004 1862 update_attempter.cc:509] Updating boot flags...
Jul 6 23:46:50.103873 systemd[1]: Reload requested from client PID 3233 ('systemctl') (unit session-9.scope)...
Jul 6 23:46:50.103885 systemd[1]: Reloading...
Jul 6 23:46:50.176845 zram_generator::config[3282]: No configuration found.
Jul 6 23:46:50.240909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:46:50.331619 systemd[1]: Reloading finished in 227 ms.
Jul 6 23:46:50.351167 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:50.364601 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:46:50.364839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:50.364899 systemd[1]: kubelet.service: Consumed 680ms CPU time, 127.5M memory peak.
Jul 6 23:46:50.366427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:50.551560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:50.557517 (kubelet)[3343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:46:50.597442 kubelet[3343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:46:50.597442 kubelet[3343]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:46:50.597442 kubelet[3343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:46:50.597799 kubelet[3343]: I0706 23:46:50.597503 3343 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:46:50.603769 kubelet[3343]: I0706 23:46:50.603655 3343 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 6 23:46:50.603769 kubelet[3343]: I0706 23:46:50.603685 3343 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:46:50.604431 kubelet[3343]: I0706 23:46:50.604403 3343 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 6 23:46:50.606671 kubelet[3343]: I0706 23:46:50.606549 3343 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 6 23:46:50.608913 kubelet[3343]: I0706 23:46:50.608877 3343 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:46:50.612082 kubelet[3343]: I0706 23:46:50.612006 3343 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:46:50.614583 kubelet[3343]: I0706 23:46:50.614565 3343 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:46:50.614796 kubelet[3343]: I0706 23:46:50.614720 3343 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:46:50.614960 kubelet[3343]: I0706 23:46:50.614758 3343 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-ba147b1783","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:46:50.614960 kubelet[3343]: I0706 23:46:50.614893 3343 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:46:50.614960 kubelet[3343]: I0706 23:46:50.614899 3343 container_manager_linux.go:304] "Creating device plugin manager"
Jul 6 23:46:50.614960 kubelet[3343]: I0706 23:46:50.614934 3343 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:46:50.615077 kubelet[3343]: I0706 23:46:50.615034 3343 kubelet.go:446] "Attempting to sync node with API server"
Jul 6 23:46:50.615077 kubelet[3343]: I0706 23:46:50.615042 3343 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:46:50.615077 kubelet[3343]: I0706 23:46:50.615058 3343 kubelet.go:352] "Adding apiserver pod source"
Jul 6 23:46:50.615077 kubelet[3343]: I0706 23:46:50.615066 3343 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:46:50.618735 kubelet[3343]: I0706 23:46:50.618024 3343 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:46:50.618735 kubelet[3343]: I0706 23:46:50.618306 3343 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:46:50.618735 kubelet[3343]: I0706 23:46:50.618592 3343 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:46:50.618735 kubelet[3343]: I0706 23:46:50.618616 3343 server.go:1287] "Started kubelet"
Jul 6 23:46:50.622004 kubelet[3343]: I0706 23:46:50.621443 3343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:46:50.622764 kubelet[3343]: I0706 23:46:50.622412 3343 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:46:50.629097 kubelet[3343]: I0706 23:46:50.629027 3343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:46:50.629323 kubelet[3343]: I0706 23:46:50.629298 3343 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:46:50.629520 kubelet[3343]: I0706 23:46:50.629498 3343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:46:50.632543 kubelet[3343]: I0706 23:46:50.632518 3343 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:46:50.632634 kubelet[3343]: E0706 23:46:50.632576 3343 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-ba147b1783\" not found"
Jul 6 23:46:50.632847 kubelet[3343]: I0706 23:46:50.632826 3343 server.go:479] "Adding debug handlers to kubelet server"
Jul 6 23:46:50.639002 kubelet[3343]: I0706 23:46:50.638976 3343 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:46:50.639282 kubelet[3343]: I0706 23:46:50.639261 3343 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:46:50.642189 kubelet[3343]: I0706 23:46:50.641872 3343 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:46:50.642189 kubelet[3343]: I0706 23:46:50.641971 3343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:46:50.643682 kubelet[3343]: I0706 23:46:50.643477 3343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:46:50.644006 kubelet[3343]: I0706 23:46:50.643796 3343 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:46:50.645805 kubelet[3343]: I0706 23:46:50.645516 3343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:46:50.645805 kubelet[3343]: I0706 23:46:50.645539 3343 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 6 23:46:50.645805 kubelet[3343]: I0706 23:46:50.645568 3343 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:46:50.645805 kubelet[3343]: I0706 23:46:50.645574 3343 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:46:50.645805 kubelet[3343]: E0706 23:46:50.645611 3343 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:46:50.651170 kubelet[3343]: E0706 23:46:50.651035 3343 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:46:50.683119 kubelet[3343]: I0706 23:46:50.683097 3343 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:46:50.683274 kubelet[3343]: I0706 23:46:50.683261 3343 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:46:50.683405 kubelet[3343]: I0706 23:46:50.683319 3343 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:50.683571 kubelet[3343]: I0706 23:46:50.683558 3343 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:46:50.683654 kubelet[3343]: I0706 23:46:50.683633 3343 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:46:50.684288 kubelet[3343]: I0706 23:46:50.683687 3343 policy_none.go:49] "None policy: Start" Jul 6 23:46:50.684288 kubelet[3343]: I0706 23:46:50.683703 3343 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:46:50.684288 kubelet[3343]: I0706 23:46:50.683714 3343 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:46:50.684288 kubelet[3343]: I0706 23:46:50.683828 3343 state_mem.go:75] "Updated machine memory state" Jul 6 23:46:50.687895 kubelet[3343]: I0706 23:46:50.687872 3343 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:46:50.688070 kubelet[3343]: I0706 23:46:50.688044 3343 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:46:50.688112 kubelet[3343]: I0706 
23:46:50.688065 3343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:46:50.688922 kubelet[3343]: I0706 23:46:50.688887 3343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:46:50.690642 kubelet[3343]: E0706 23:46:50.690616 3343 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:46:50.746102 kubelet[3343]: I0706 23:46:50.746068 3343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.746269 kubelet[3343]: I0706 23:46:50.746253 3343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.746956 kubelet[3343]: I0706 23:46:50.746415 3343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.766766 kubelet[3343]: W0706 23:46:50.766683 3343 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:46:50.767555 kubelet[3343]: E0706 23:46:50.766832 3343 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-ba147b1783\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.767555 kubelet[3343]: W0706 23:46:50.767434 3343 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:46:50.767555 kubelet[3343]: W0706 23:46:50.767454 3343 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:46:50.767555 kubelet[3343]: E0706 23:46:50.767469 3343 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.767555 kubelet[3343]: E0706 23:46:50.767489 3343 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.791972 kubelet[3343]: I0706 23:46:50.791912 3343 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.807805 kubelet[3343]: I0706 23:46:50.807731 3343 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.807906 kubelet[3343]: I0706 23:46:50.807852 3343 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840369 kubelet[3343]: I0706 23:46:50.840168 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a081a88f61152832c52e85e48dcf9231-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-ba147b1783\" (UID: \"a081a88f61152832c52e85e48dcf9231\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840369 kubelet[3343]: I0706 23:46:50.840211 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d17cb6f16f61e7b6d5ad3590a29897ad-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" (UID: \"d17cb6f16f61e7b6d5ad3590a29897ad\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840369 kubelet[3343]: I0706 23:46:50.840226 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840369 kubelet[3343]: I0706 23:46:50.840239 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840369 kubelet[3343]: I0706 23:46:50.840253 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17cb6f16f61e7b6d5ad3590a29897ad-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" (UID: \"d17cb6f16f61e7b6d5ad3590a29897ad\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840571 kubelet[3343]: I0706 23:46:50.840263 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17cb6f16f61e7b6d5ad3590a29897ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" (UID: \"d17cb6f16f61e7b6d5ad3590a29897ad\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840571 kubelet[3343]: I0706 23:46:50.840272 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840571 kubelet[3343]: I0706 23:46:50.840283 3343 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:50.840571 kubelet[3343]: I0706 23:46:50.840293 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b0000eb0ba63b05fd354c6af094af5d-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-ba147b1783\" (UID: \"7b0000eb0ba63b05fd354c6af094af5d\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:51.617259 kubelet[3343]: I0706 23:46:51.617007 3343 apiserver.go:52] "Watching apiserver" Jul 6 23:46:51.632795 kubelet[3343]: I0706 23:46:51.632763 3343 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:46:51.671788 kubelet[3343]: I0706 23:46:51.671761 3343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:51.672620 kubelet[3343]: I0706 23:46:51.672421 3343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:51.684930 kubelet[3343]: W0706 23:46:51.684902 3343 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:46:51.685027 kubelet[3343]: E0706 23:46:51.684955 3343 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-ba147b1783\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:51.685696 kubelet[3343]: W0706 23:46:51.685668 3343 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:46:51.685985 kubelet[3343]: E0706 23:46:51.685782 3343 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-ba147b1783\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783" Jul 6 23:46:51.704182 kubelet[3343]: I0706 23:46:51.704123 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-ba147b1783" podStartSLOduration=3.704110148 podStartE2EDuration="3.704110148s" podCreationTimestamp="2025-07-06 23:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:51.704039874 +0000 UTC m=+1.144041866" watchObservedRunningTime="2025-07-06 23:46:51.704110148 +0000 UTC m=+1.144112140" Jul 6 23:46:51.704445 kubelet[3343]: I0706 23:46:51.704207 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-a-ba147b1783" podStartSLOduration=3.704203543 podStartE2EDuration="3.704203543s" podCreationTimestamp="2025-07-06 23:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:51.69507331 +0000 UTC m=+1.135075302" watchObservedRunningTime="2025-07-06 23:46:51.704203543 +0000 UTC m=+1.144205535" Jul 6 23:46:51.731690 kubelet[3343]: I0706 23:46:51.731398 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-a-ba147b1783" podStartSLOduration=3.731381412 podStartE2EDuration="3.731381412s" podCreationTimestamp="2025-07-06 23:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:51.720951237 +0000 UTC m=+1.160953229" watchObservedRunningTime="2025-07-06 
23:46:51.731381412 +0000 UTC m=+1.171383404" Jul 6 23:46:56.973358 kubelet[3343]: I0706 23:46:56.973306 3343 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:46:57.332695 kubelet[3343]: I0706 23:46:56.974173 3343 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:46:57.332782 containerd[1874]: time="2025-07-06T23:46:56.973985273Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:46:57.885220 systemd[1]: Created slice kubepods-besteffort-pod7c6f8479_5860_427f_8bd3_3b75a59f9df2.slice - libcontainer container kubepods-besteffort-pod7c6f8479_5860_427f_8bd3_3b75a59f9df2.slice. Jul 6 23:46:57.978331 kubelet[3343]: I0706 23:46:57.978286 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c6f8479-5860-427f-8bd3-3b75a59f9df2-kube-proxy\") pod \"kube-proxy-kxqkd\" (UID: \"7c6f8479-5860-427f-8bd3-3b75a59f9df2\") " pod="kube-system/kube-proxy-kxqkd" Jul 6 23:46:57.978331 kubelet[3343]: I0706 23:46:57.978333 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c6f8479-5860-427f-8bd3-3b75a59f9df2-lib-modules\") pod \"kube-proxy-kxqkd\" (UID: \"7c6f8479-5860-427f-8bd3-3b75a59f9df2\") " pod="kube-system/kube-proxy-kxqkd" Jul 6 23:46:57.978331 kubelet[3343]: I0706 23:46:57.978347 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmllz\" (UniqueName: \"kubernetes.io/projected/7c6f8479-5860-427f-8bd3-3b75a59f9df2-kube-api-access-lmllz\") pod \"kube-proxy-kxqkd\" (UID: \"7c6f8479-5860-427f-8bd3-3b75a59f9df2\") " pod="kube-system/kube-proxy-kxqkd" Jul 6 23:46:57.978715 kubelet[3343]: I0706 23:46:57.978363 3343 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c6f8479-5860-427f-8bd3-3b75a59f9df2-xtables-lock\") pod \"kube-proxy-kxqkd\" (UID: \"7c6f8479-5860-427f-8bd3-3b75a59f9df2\") " pod="kube-system/kube-proxy-kxqkd" Jul 6 23:46:58.042615 systemd[1]: Created slice kubepods-besteffort-poda561581b_bbec_4405_9a7f_65e3a5980d1f.slice - libcontainer container kubepods-besteffort-poda561581b_bbec_4405_9a7f_65e3a5980d1f.slice. Jul 6 23:46:58.079796 kubelet[3343]: I0706 23:46:58.079412 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a561581b-bbec-4405-9a7f-65e3a5980d1f-var-lib-calico\") pod \"tigera-operator-747864d56d-snhm4\" (UID: \"a561581b-bbec-4405-9a7f-65e3a5980d1f\") " pod="tigera-operator/tigera-operator-747864d56d-snhm4" Jul 6 23:46:58.079796 kubelet[3343]: I0706 23:46:58.079452 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4v2\" (UniqueName: \"kubernetes.io/projected/a561581b-bbec-4405-9a7f-65e3a5980d1f-kube-api-access-kh4v2\") pod \"tigera-operator-747864d56d-snhm4\" (UID: \"a561581b-bbec-4405-9a7f-65e3a5980d1f\") " pod="tigera-operator/tigera-operator-747864d56d-snhm4" Jul 6 23:46:58.192079 containerd[1874]: time="2025-07-06T23:46:58.191967349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kxqkd,Uid:7c6f8479-5860-427f-8bd3-3b75a59f9df2,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:58.345852 containerd[1874]: time="2025-07-06T23:46:58.345807635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-snhm4,Uid:a561581b-bbec-4405-9a7f-65e3a5980d1f,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:46:58.683002 containerd[1874]: time="2025-07-06T23:46:58.682966663Z" level=info msg="connecting to shim 
99655a488a1f92130004d437e3a4d44a4048ec1f655ae53f060cf2aac264ffc8" address="unix:///run/containerd/s/ac80d373baca46dc1e2c37b8c0e9900a4ec6d2dadc936737e618bd324b3604a7" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:58.702881 systemd[1]: Started cri-containerd-99655a488a1f92130004d437e3a4d44a4048ec1f655ae53f060cf2aac264ffc8.scope - libcontainer container 99655a488a1f92130004d437e3a4d44a4048ec1f655ae53f060cf2aac264ffc8. Jul 6 23:46:58.787448 containerd[1874]: time="2025-07-06T23:46:58.787394244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kxqkd,Uid:7c6f8479-5860-427f-8bd3-3b75a59f9df2,Namespace:kube-system,Attempt:0,} returns sandbox id \"99655a488a1f92130004d437e3a4d44a4048ec1f655ae53f060cf2aac264ffc8\"" Jul 6 23:46:58.790275 containerd[1874]: time="2025-07-06T23:46:58.790242400Z" level=info msg="CreateContainer within sandbox \"99655a488a1f92130004d437e3a4d44a4048ec1f655ae53f060cf2aac264ffc8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:46:59.139466 containerd[1874]: time="2025-07-06T23:46:59.139378499Z" level=info msg="connecting to shim 4ce822cc654a70e5bbf62ca978492b035c6add87c70e7c39720e0098304ed233" address="unix:///run/containerd/s/f5438761c27de64ce8e78f9f5c5ad3e32719849c66c99e1cd652d862c72d997f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:59.158866 systemd[1]: Started cri-containerd-4ce822cc654a70e5bbf62ca978492b035c6add87c70e7c39720e0098304ed233.scope - libcontainer container 4ce822cc654a70e5bbf62ca978492b035c6add87c70e7c39720e0098304ed233. Jul 6 23:46:59.185862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631829327.mount: Deactivated successfully. 
Jul 6 23:46:59.187335 containerd[1874]: time="2025-07-06T23:46:59.186256730Z" level=info msg="Container f6064389d0002ad6f56b3887100e9ca0156820968828849e10c4693c6a843f2d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:59.187528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270197670.mount: Deactivated successfully. Jul 6 23:46:59.227142 containerd[1874]: time="2025-07-06T23:46:59.227088727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-snhm4,Uid:a561581b-bbec-4405-9a7f-65e3a5980d1f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ce822cc654a70e5bbf62ca978492b035c6add87c70e7c39720e0098304ed233\"" Jul 6 23:46:59.229804 containerd[1874]: time="2025-07-06T23:46:59.229771349Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:46:59.390701 containerd[1874]: time="2025-07-06T23:46:59.390580567Z" level=info msg="CreateContainer within sandbox \"99655a488a1f92130004d437e3a4d44a4048ec1f655ae53f060cf2aac264ffc8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f6064389d0002ad6f56b3887100e9ca0156820968828849e10c4693c6a843f2d\"" Jul 6 23:46:59.392084 containerd[1874]: time="2025-07-06T23:46:59.392056667Z" level=info msg="StartContainer for \"f6064389d0002ad6f56b3887100e9ca0156820968828849e10c4693c6a843f2d\"" Jul 6 23:46:59.393223 containerd[1874]: time="2025-07-06T23:46:59.393197132Z" level=info msg="connecting to shim f6064389d0002ad6f56b3887100e9ca0156820968828849e10c4693c6a843f2d" address="unix:///run/containerd/s/ac80d373baca46dc1e2c37b8c0e9900a4ec6d2dadc936737e618bd324b3604a7" protocol=ttrpc version=3 Jul 6 23:46:59.408859 systemd[1]: Started cri-containerd-f6064389d0002ad6f56b3887100e9ca0156820968828849e10c4693c6a843f2d.scope - libcontainer container f6064389d0002ad6f56b3887100e9ca0156820968828849e10c4693c6a843f2d. 
Jul 6 23:46:59.438194 containerd[1874]: time="2025-07-06T23:46:59.438095689Z" level=info msg="StartContainer for \"f6064389d0002ad6f56b3887100e9ca0156820968828849e10c4693c6a843f2d\" returns successfully" Jul 6 23:46:59.701698 kubelet[3343]: I0706 23:46:59.701566 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kxqkd" podStartSLOduration=2.701548828 podStartE2EDuration="2.701548828s" podCreationTimestamp="2025-07-06 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:59.700891385 +0000 UTC m=+9.140893377" watchObservedRunningTime="2025-07-06 23:46:59.701548828 +0000 UTC m=+9.141550820" Jul 6 23:47:04.360076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355308967.mount: Deactivated successfully. Jul 6 23:47:05.186967 containerd[1874]: time="2025-07-06T23:47:05.186914845Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:05.189853 containerd[1874]: time="2025-07-06T23:47:05.189826412Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:47:05.233141 containerd[1874]: time="2025-07-06T23:47:05.233077390Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:05.279026 containerd[1874]: time="2025-07-06T23:47:05.278944208Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:05.279625 containerd[1874]: time="2025-07-06T23:47:05.279593082Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id 
\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 6.04979134s" Jul 6 23:47:05.279775 containerd[1874]: time="2025-07-06T23:47:05.279694772Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:47:05.281853 containerd[1874]: time="2025-07-06T23:47:05.281826342Z" level=info msg="CreateContainer within sandbox \"4ce822cc654a70e5bbf62ca978492b035c6add87c70e7c39720e0098304ed233\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:47:05.440299 containerd[1874]: time="2025-07-06T23:47:05.439815004Z" level=info msg="Container 4391d439869085def86df5f191cf5d3fd04311a23679502bfa79fca26a0396ff: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:05.536450 containerd[1874]: time="2025-07-06T23:47:05.536393442Z" level=info msg="CreateContainer within sandbox \"4ce822cc654a70e5bbf62ca978492b035c6add87c70e7c39720e0098304ed233\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4391d439869085def86df5f191cf5d3fd04311a23679502bfa79fca26a0396ff\"" Jul 6 23:47:05.537339 containerd[1874]: time="2025-07-06T23:47:05.537288059Z" level=info msg="StartContainer for \"4391d439869085def86df5f191cf5d3fd04311a23679502bfa79fca26a0396ff\"" Jul 6 23:47:05.538160 containerd[1874]: time="2025-07-06T23:47:05.538114153Z" level=info msg="connecting to shim 4391d439869085def86df5f191cf5d3fd04311a23679502bfa79fca26a0396ff" address="unix:///run/containerd/s/f5438761c27de64ce8e78f9f5c5ad3e32719849c66c99e1cd652d862c72d997f" protocol=ttrpc version=3 Jul 6 23:47:05.555869 systemd[1]: Started cri-containerd-4391d439869085def86df5f191cf5d3fd04311a23679502bfa79fca26a0396ff.scope - libcontainer container 
4391d439869085def86df5f191cf5d3fd04311a23679502bfa79fca26a0396ff. Jul 6 23:47:05.580576 containerd[1874]: time="2025-07-06T23:47:05.580541268Z" level=info msg="StartContainer for \"4391d439869085def86df5f191cf5d3fd04311a23679502bfa79fca26a0396ff\" returns successfully" Jul 6 23:47:10.636452 sudo[2332]: pam_unix(sudo:session): session closed for user root Jul 6 23:47:10.721411 sshd[2331]: Connection closed by 10.200.16.10 port 44984 Jul 6 23:47:10.721996 sshd-session[2329]: pam_unix(sshd:session): session closed for user core Jul 6 23:47:10.725085 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:44984.service: Deactivated successfully. Jul 6 23:47:10.729130 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:47:10.729281 systemd[1]: session-9.scope: Consumed 3.049s CPU time, 228.6M memory peak. Jul 6 23:47:10.730489 systemd-logind[1858]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:47:10.732181 systemd-logind[1858]: Removed session 9. Jul 6 23:47:15.883633 kubelet[3343]: I0706 23:47:15.882382 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-snhm4" podStartSLOduration=11.830218899 podStartE2EDuration="17.882366913s" podCreationTimestamp="2025-07-06 23:46:58 +0000 UTC" firstStartedPulling="2025-07-06 23:46:59.228381773 +0000 UTC m=+8.668383765" lastFinishedPulling="2025-07-06 23:47:05.280529787 +0000 UTC m=+14.720531779" observedRunningTime="2025-07-06 23:47:05.710438454 +0000 UTC m=+15.150440446" watchObservedRunningTime="2025-07-06 23:47:15.882366913 +0000 UTC m=+25.322368905" Jul 6 23:47:15.891310 systemd[1]: Created slice kubepods-besteffort-pod8140fb97_bf61_4574_b8f9_2fc722ad30f0.slice - libcontainer container kubepods-besteffort-pod8140fb97_bf61_4574_b8f9_2fc722ad30f0.slice. 
Jul 6 23:47:15.977841 kubelet[3343]: I0706 23:47:15.977706 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8140fb97-bf61-4574-b8f9-2fc722ad30f0-tigera-ca-bundle\") pod \"calico-typha-67d9cb7c66-vdqlt\" (UID: \"8140fb97-bf61-4574-b8f9-2fc722ad30f0\") " pod="calico-system/calico-typha-67d9cb7c66-vdqlt" Jul 6 23:47:15.977841 kubelet[3343]: I0706 23:47:15.977851 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8140fb97-bf61-4574-b8f9-2fc722ad30f0-typha-certs\") pod \"calico-typha-67d9cb7c66-vdqlt\" (UID: \"8140fb97-bf61-4574-b8f9-2fc722ad30f0\") " pod="calico-system/calico-typha-67d9cb7c66-vdqlt" Jul 6 23:47:15.977841 kubelet[3343]: I0706 23:47:15.977868 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hzt\" (UniqueName: \"kubernetes.io/projected/8140fb97-bf61-4574-b8f9-2fc722ad30f0-kube-api-access-97hzt\") pod \"calico-typha-67d9cb7c66-vdqlt\" (UID: \"8140fb97-bf61-4574-b8f9-2fc722ad30f0\") " pod="calico-system/calico-typha-67d9cb7c66-vdqlt" Jul 6 23:47:15.986639 systemd[1]: Created slice kubepods-besteffort-pod06fdd1bd_1b36_4537_83e1_74b186846a9c.slice - libcontainer container kubepods-besteffort-pod06fdd1bd_1b36_4537_83e1_74b186846a9c.slice. 
Jul 6 23:47:15.989993 kubelet[3343]: I0706 23:47:15.989967 3343 status_manager.go:890] "Failed to get status for pod" podUID="06fdd1bd-1b36-4537-83e1-74b186846a9c" pod="calico-system/calico-node-trht4" err="pods \"calico-node-trht4\" is forbidden: User \"system:node:ci-4344.1.1-a-ba147b1783\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.1-a-ba147b1783' and this object" Jul 6 23:47:16.079702 kubelet[3343]: I0706 23:47:16.078279 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06fdd1bd-1b36-4537-83e1-74b186846a9c-tigera-ca-bundle\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079702 kubelet[3343]: I0706 23:47:16.078315 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6rp\" (UniqueName: \"kubernetes.io/projected/06fdd1bd-1b36-4537-83e1-74b186846a9c-kube-api-access-wm6rp\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079702 kubelet[3343]: I0706 23:47:16.078330 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-cni-bin-dir\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079702 kubelet[3343]: I0706 23:47:16.078339 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-policysync\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " 
pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079702 kubelet[3343]: I0706 23:47:16.078352 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-lib-modules\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079916 kubelet[3343]: I0706 23:47:16.078361 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/06fdd1bd-1b36-4537-83e1-74b186846a9c-node-certs\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079916 kubelet[3343]: I0706 23:47:16.078371 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-cni-net-dir\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079916 kubelet[3343]: I0706 23:47:16.078381 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-xtables-lock\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079916 kubelet[3343]: I0706 23:47:16.078390 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-cni-log-dir\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079916 kubelet[3343]: I0706 23:47:16.078398 3343 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-var-lib-calico\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079991 kubelet[3343]: I0706 23:47:16.078407 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-flexvol-driver-host\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.079991 kubelet[3343]: I0706 23:47:16.078426 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/06fdd1bd-1b36-4537-83e1-74b186846a9c-var-run-calico\") pod \"calico-node-trht4\" (UID: \"06fdd1bd-1b36-4537-83e1-74b186846a9c\") " pod="calico-system/calico-node-trht4" Jul 6 23:47:16.140102 kubelet[3343]: E0706 23:47:16.139577 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tnbxw" podUID="4e241bd5-4cc4-4ff9-83ce-48a34c457465" Jul 6 23:47:16.178750 kubelet[3343]: I0706 23:47:16.178712 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4e241bd5-4cc4-4ff9-83ce-48a34c457465-registration-dir\") pod \"csi-node-driver-tnbxw\" (UID: \"4e241bd5-4cc4-4ff9-83ce-48a34c457465\") " pod="calico-system/csi-node-driver-tnbxw" Jul 6 23:47:16.179758 kubelet[3343]: I0706 23:47:16.179137 3343 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e241bd5-4cc4-4ff9-83ce-48a34c457465-kubelet-dir\") pod \"csi-node-driver-tnbxw\" (UID: \"4e241bd5-4cc4-4ff9-83ce-48a34c457465\") " pod="calico-system/csi-node-driver-tnbxw" Jul 6 23:47:16.179758 kubelet[3343]: I0706 23:47:16.179477 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4e241bd5-4cc4-4ff9-83ce-48a34c457465-socket-dir\") pod \"csi-node-driver-tnbxw\" (UID: \"4e241bd5-4cc4-4ff9-83ce-48a34c457465\") " pod="calico-system/csi-node-driver-tnbxw" Jul 6 23:47:16.179758 kubelet[3343]: I0706 23:47:16.179539 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlw6\" (UniqueName: \"kubernetes.io/projected/4e241bd5-4cc4-4ff9-83ce-48a34c457465-kube-api-access-jzlw6\") pod \"csi-node-driver-tnbxw\" (UID: \"4e241bd5-4cc4-4ff9-83ce-48a34c457465\") " pod="calico-system/csi-node-driver-tnbxw" Jul 6 23:47:16.179758 kubelet[3343]: I0706 23:47:16.179555 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4e241bd5-4cc4-4ff9-83ce-48a34c457465-varrun\") pod \"csi-node-driver-tnbxw\" (UID: \"4e241bd5-4cc4-4ff9-83ce-48a34c457465\") " pod="calico-system/csi-node-driver-tnbxw" Jul 6 23:47:16.180488 kubelet[3343]: E0706 23:47:16.180472 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.180651 kubelet[3343]: W0706 23:47:16.180604 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.180799 kubelet[3343]: E0706 23:47:16.180780 3343 plugins.go:695] 
"Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.181007 kubelet[3343]: E0706 23:47:16.180997 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.181134 kubelet[3343]: W0706 23:47:16.181054 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.181134 kubelet[3343]: E0706 23:47:16.181074 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.181258 kubelet[3343]: E0706 23:47:16.181248 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.181311 kubelet[3343]: W0706 23:47:16.181301 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.181466 kubelet[3343]: E0706 23:47:16.181453 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.181819 kubelet[3343]: E0706 23:47:16.181796 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.181874 kubelet[3343]: W0706 23:47:16.181816 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.181874 kubelet[3343]: E0706 23:47:16.181840 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.182152 kubelet[3343]: E0706 23:47:16.182132 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.182152 kubelet[3343]: W0706 23:47:16.182145 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.182346 kubelet[3343]: E0706 23:47:16.182326 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.182495 kubelet[3343]: E0706 23:47:16.182478 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.182495 kubelet[3343]: W0706 23:47:16.182492 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.182606 kubelet[3343]: E0706 23:47:16.182586 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.182787 kubelet[3343]: E0706 23:47:16.182772 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.182787 kubelet[3343]: W0706 23:47:16.182784 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.183084 kubelet[3343]: E0706 23:47:16.183054 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.183446 kubelet[3343]: E0706 23:47:16.183428 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.183446 kubelet[3343]: W0706 23:47:16.183441 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.183733 kubelet[3343]: E0706 23:47:16.183711 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.184072 kubelet[3343]: E0706 23:47:16.184050 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.184072 kubelet[3343]: W0706 23:47:16.184066 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.184193 kubelet[3343]: E0706 23:47:16.184171 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.184852 kubelet[3343]: E0706 23:47:16.184832 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.184852 kubelet[3343]: W0706 23:47:16.184848 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.184998 kubelet[3343]: E0706 23:47:16.184958 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.185133 kubelet[3343]: E0706 23:47:16.185118 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.185167 kubelet[3343]: W0706 23:47:16.185133 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.185226 kubelet[3343]: E0706 23:47:16.185211 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.185355 kubelet[3343]: E0706 23:47:16.185339 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.185355 kubelet[3343]: W0706 23:47:16.185353 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.185439 kubelet[3343]: E0706 23:47:16.185386 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.186714 kubelet[3343]: E0706 23:47:16.185896 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.186714 kubelet[3343]: W0706 23:47:16.185910 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.186969 kubelet[3343]: E0706 23:47:16.186939 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.187025 kubelet[3343]: E0706 23:47:16.187011 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.187044 kubelet[3343]: W0706 23:47:16.187023 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.187116 kubelet[3343]: E0706 23:47:16.187103 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.187195 kubelet[3343]: E0706 23:47:16.187182 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.187195 kubelet[3343]: W0706 23:47:16.187190 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.187335 kubelet[3343]: E0706 23:47:16.187271 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.187518 kubelet[3343]: E0706 23:47:16.187495 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.187518 kubelet[3343]: W0706 23:47:16.187510 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.187681 kubelet[3343]: E0706 23:47:16.187646 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.187809 kubelet[3343]: E0706 23:47:16.187793 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.187809 kubelet[3343]: W0706 23:47:16.187806 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.187992 kubelet[3343]: E0706 23:47:16.187974 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.188460 kubelet[3343]: E0706 23:47:16.188439 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.188460 kubelet[3343]: W0706 23:47:16.188455 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.188591 kubelet[3343]: E0706 23:47:16.188548 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.188788 kubelet[3343]: E0706 23:47:16.188771 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.188788 kubelet[3343]: W0706 23:47:16.188785 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.188900 kubelet[3343]: E0706 23:47:16.188876 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.189031 kubelet[3343]: E0706 23:47:16.189017 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.189031 kubelet[3343]: W0706 23:47:16.189028 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.189179 kubelet[3343]: E0706 23:47:16.189157 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.189369 kubelet[3343]: E0706 23:47:16.189355 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.189369 kubelet[3343]: W0706 23:47:16.189366 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.190090 kubelet[3343]: E0706 23:47:16.190066 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.190213 kubelet[3343]: E0706 23:47:16.190196 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.190213 kubelet[3343]: W0706 23:47:16.190206 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.190826 kubelet[3343]: E0706 23:47:16.190610 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.191083 kubelet[3343]: E0706 23:47:16.191060 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.191083 kubelet[3343]: W0706 23:47:16.191076 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.191445 kubelet[3343]: E0706 23:47:16.191413 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.191543 kubelet[3343]: E0706 23:47:16.191528 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.191543 kubelet[3343]: W0706 23:47:16.191538 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.191635 kubelet[3343]: E0706 23:47:16.191612 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.192332 kubelet[3343]: E0706 23:47:16.192313 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.192332 kubelet[3343]: W0706 23:47:16.192326 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.192497 kubelet[3343]: E0706 23:47:16.192410 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.192590 kubelet[3343]: E0706 23:47:16.192573 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.192590 kubelet[3343]: W0706 23:47:16.192586 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.192674 kubelet[3343]: E0706 23:47:16.192658 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.192795 kubelet[3343]: E0706 23:47:16.192781 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.192795 kubelet[3343]: W0706 23:47:16.192790 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.192875 kubelet[3343]: E0706 23:47:16.192862 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.193135 kubelet[3343]: E0706 23:47:16.193114 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.193135 kubelet[3343]: W0706 23:47:16.193126 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.193216 kubelet[3343]: E0706 23:47:16.193200 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.194708 kubelet[3343]: E0706 23:47:16.194683 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.194708 kubelet[3343]: W0706 23:47:16.194703 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.196052 kubelet[3343]: E0706 23:47:16.196029 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.196052 kubelet[3343]: W0706 23:47:16.196045 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.196391 kubelet[3343]: E0706 23:47:16.196369 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.196391 kubelet[3343]: W0706 23:47:16.196382 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Jul 6 23:47:16.196528 kubelet[3343]: E0706 23:47:16.196512 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.196528 kubelet[3343]: W0706 23:47:16.196523 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.196591 kubelet[3343]: E0706 23:47:16.196533 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.199908 kubelet[3343]: E0706 23:47:16.199883 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.199975 kubelet[3343]: E0706 23:47:16.199926 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.199975 kubelet[3343]: E0706 23:47:16.199940 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.200016 kubelet[3343]: E0706 23:47:16.199980 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.200016 kubelet[3343]: W0706 23:47:16.199986 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.200016 kubelet[3343]: E0706 23:47:16.200005 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.200764 kubelet[3343]: E0706 23:47:16.200187 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.200764 kubelet[3343]: W0706 23:47:16.200198 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.200764 kubelet[3343]: E0706 23:47:16.200208 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.200764 kubelet[3343]: E0706 23:47:16.200329 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.200764 kubelet[3343]: W0706 23:47:16.200335 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.200764 kubelet[3343]: E0706 23:47:16.200341 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.201340 kubelet[3343]: E0706 23:47:16.200979 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.201340 kubelet[3343]: W0706 23:47:16.200993 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.201340 kubelet[3343]: E0706 23:47:16.201002 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.201340 kubelet[3343]: E0706 23:47:16.201116 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.201340 kubelet[3343]: W0706 23:47:16.201121 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.201340 kubelet[3343]: E0706 23:47:16.201129 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.202534 kubelet[3343]: E0706 23:47:16.201637 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.202534 kubelet[3343]: W0706 23:47:16.201652 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.202534 kubelet[3343]: E0706 23:47:16.201661 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:16.202668 containerd[1874]: time="2025-07-06T23:47:16.202227130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d9cb7c66-vdqlt,Uid:8140fb97-bf61-4574-b8f9-2fc722ad30f0,Namespace:calico-system,Attempt:0,}" Jul 6 23:47:16.225755 kubelet[3343]: E0706 23:47:16.224515 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.225755 kubelet[3343]: W0706 23:47:16.224531 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.225907 kubelet[3343]: E0706 23:47:16.224549 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:16.264904 containerd[1874]: time="2025-07-06T23:47:16.264862461Z" level=info msg="connecting to shim 2e7d15c847c3b4cb117509633877c1b51c90217bd621e8fde5dadb5245031c1f" address="unix:///run/containerd/s/5495f2b63e8a33ff9a5dd0c3d5cb13a24a430f6056ca4927a529cc9244f9870e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:16.281436 kubelet[3343]: E0706 23:47:16.281370 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:16.282161 kubelet[3343]: W0706 23:47:16.281771 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:16.282161 kubelet[3343]: E0706 23:47:16.281799 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.283161 kubelet[3343]: E0706 23:47:16.283103 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.283571 kubelet[3343]: W0706 23:47:16.283424 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.283571 kubelet[3343]: E0706 23:47:16.283445 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.284388 kubelet[3343]: E0706 23:47:16.284367 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.284388 kubelet[3343]: W0706 23:47:16.284380 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.285167 kubelet[3343]: E0706 23:47:16.284470 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.285167 kubelet[3343]: E0706 23:47:16.284478 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.285167 kubelet[3343]: W0706 23:47:16.284534 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.285167 kubelet[3343]: E0706 23:47:16.284545 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.285762 kubelet[3343]: E0706 23:47:16.285363 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.285762 kubelet[3343]: W0706 23:47:16.285377 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.285762 kubelet[3343]: E0706 23:47:16.285388 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.285762 kubelet[3343]: E0706 23:47:16.285527 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.285762 kubelet[3343]: W0706 23:47:16.285534 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.285762 kubelet[3343]: E0706 23:47:16.285542 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.286499 kubelet[3343]: E0706 23:47:16.286481 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.286499 kubelet[3343]: W0706 23:47:16.286495 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.287307 kubelet[3343]: E0706 23:47:16.286509 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.287307 kubelet[3343]: E0706 23:47:16.286840 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.287307 kubelet[3343]: W0706 23:47:16.286850 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.287307 kubelet[3343]: E0706 23:47:16.286937 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.287307 kubelet[3343]: E0706 23:47:16.286999 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.287307 kubelet[3343]: W0706 23:47:16.287010 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.287307 kubelet[3343]: E0706 23:47:16.287079 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.287307 kubelet[3343]: E0706 23:47:16.287142 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.287307 kubelet[3343]: W0706 23:47:16.287152 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.287307 kubelet[3343]: E0706 23:47:16.287219 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.287451 kubelet[3343]: E0706 23:47:16.287292 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.287451 kubelet[3343]: W0706 23:47:16.287296 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.287451 kubelet[3343]: E0706 23:47:16.287363 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.287496 kubelet[3343]: E0706 23:47:16.287469 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.287496 kubelet[3343]: W0706 23:47:16.287474 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.287496 kubelet[3343]: E0706 23:47:16.287480 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.288442 kubelet[3343]: E0706 23:47:16.287791 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.288442 kubelet[3343]: W0706 23:47:16.287803 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.288442 kubelet[3343]: E0706 23:47:16.287812 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.288442 kubelet[3343]: E0706 23:47:16.288026 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.288442 kubelet[3343]: W0706 23:47:16.288035 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.288442 kubelet[3343]: E0706 23:47:16.288052 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.288273 systemd[1]: Started cri-containerd-2e7d15c847c3b4cb117509633877c1b51c90217bd621e8fde5dadb5245031c1f.scope - libcontainer container 2e7d15c847c3b4cb117509633877c1b51c90217bd621e8fde5dadb5245031c1f.
Jul 6 23:47:16.288713 kubelet[3343]: E0706 23:47:16.288683 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.288713 kubelet[3343]: W0706 23:47:16.288698 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.289100 kubelet[3343]: E0706 23:47:16.289015 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.290189 kubelet[3343]: E0706 23:47:16.290167 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.290189 kubelet[3343]: W0706 23:47:16.290184 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.290480 kubelet[3343]: E0706 23:47:16.290456 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.290692 kubelet[3343]: E0706 23:47:16.290588 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.290692 kubelet[3343]: W0706 23:47:16.290605 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.290692 kubelet[3343]: E0706 23:47:16.290676 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.291088 kubelet[3343]: E0706 23:47:16.290725 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.291088 kubelet[3343]: W0706 23:47:16.290729 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.291357 containerd[1874]: time="2025-07-06T23:47:16.291249282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-trht4,Uid:06fdd1bd-1b36-4537-83e1-74b186846a9c,Namespace:calico-system,Attempt:0,}"
Jul 6 23:47:16.291584 kubelet[3343]: E0706 23:47:16.291566 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.291584 kubelet[3343]: W0706 23:47:16.291583 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.291815 kubelet[3343]: E0706 23:47:16.291594 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.291854 kubelet[3343]: E0706 23:47:16.291628 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.291970 kubelet[3343]: E0706 23:47:16.291867 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.291970 kubelet[3343]: W0706 23:47:16.291880 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.291970 kubelet[3343]: E0706 23:47:16.291889 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.292285 kubelet[3343]: E0706 23:47:16.292051 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.292285 kubelet[3343]: W0706 23:47:16.292056 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.292285 kubelet[3343]: E0706 23:47:16.292064 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.292285 kubelet[3343]: E0706 23:47:16.292240 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.292285 kubelet[3343]: W0706 23:47:16.292247 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.292285 kubelet[3343]: E0706 23:47:16.292260 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.292559 kubelet[3343]: E0706 23:47:16.292378 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.292559 kubelet[3343]: W0706 23:47:16.292386 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.292559 kubelet[3343]: E0706 23:47:16.292392 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.292559 kubelet[3343]: E0706 23:47:16.292485 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.292559 kubelet[3343]: W0706 23:47:16.292490 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.292559 kubelet[3343]: E0706 23:47:16.292495 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.292948 kubelet[3343]: E0706 23:47:16.292850 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.292948 kubelet[3343]: W0706 23:47:16.292861 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.292948 kubelet[3343]: E0706 23:47:16.292871 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:16.298853 kubelet[3343]: E0706 23:47:16.298833 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:16.298853 kubelet[3343]: W0706 23:47:16.298849 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:16.298943 kubelet[3343]: E0706 23:47:16.298862 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:16.334228 containerd[1874]: time="2025-07-06T23:47:16.334077235Z" level=info msg="connecting to shim 06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238" address="unix:///run/containerd/s/05a2a4310bd7a05319638424c875d2be8b0bb07e9ce6eb8c1915e1bf5ecb2b9f" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:47:16.334953 containerd[1874]: time="2025-07-06T23:47:16.334923771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d9cb7c66-vdqlt,Uid:8140fb97-bf61-4574-b8f9-2fc722ad30f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e7d15c847c3b4cb117509633877c1b51c90217bd621e8fde5dadb5245031c1f\""
Jul 6 23:47:16.337649 containerd[1874]: time="2025-07-06T23:47:16.337618933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 6 23:47:16.356902 systemd[1]: Started cri-containerd-06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238.scope - libcontainer container 06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238.
Jul 6 23:47:16.377441 containerd[1874]: time="2025-07-06T23:47:16.377404901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-trht4,Uid:06fdd1bd-1b36-4537-83e1-74b186846a9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238\""
Jul 6 23:47:17.397684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733049365.mount: Deactivated successfully.
Jul 6 23:47:17.646793 kubelet[3343]: E0706 23:47:17.646730 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tnbxw" podUID="4e241bd5-4cc4-4ff9-83ce-48a34c457465"
Jul 6 23:47:17.936773 containerd[1874]: time="2025-07-06T23:47:17.936493784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:17.940965 containerd[1874]: time="2025-07-06T23:47:17.940937043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 6 23:47:17.947629 containerd[1874]: time="2025-07-06T23:47:17.947600704Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:17.952464 containerd[1874]: time="2025-07-06T23:47:17.952434194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:17.953031 containerd[1874]: time="2025-07-06T23:47:17.952702583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.615058698s"
Jul 6 23:47:17.953031 containerd[1874]: time="2025-07-06T23:47:17.952724784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 6 23:47:17.954237 containerd[1874]: time="2025-07-06T23:47:17.954216108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 6 23:47:17.965828 containerd[1874]: time="2025-07-06T23:47:17.965751179Z" level=info msg="CreateContainer within sandbox \"2e7d15c847c3b4cb117509633877c1b51c90217bd621e8fde5dadb5245031c1f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 6 23:47:17.998191 containerd[1874]: time="2025-07-06T23:47:17.998146425Z" level=info msg="Container 6c7ddca4f1223c1ff77972d0b439975ea122e39a58d33f81669bbb3356502d5a: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:18.027768 containerd[1874]: time="2025-07-06T23:47:18.027681625Z" level=info msg="CreateContainer within sandbox \"2e7d15c847c3b4cb117509633877c1b51c90217bd621e8fde5dadb5245031c1f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6c7ddca4f1223c1ff77972d0b439975ea122e39a58d33f81669bbb3356502d5a\""
Jul 6 23:47:18.028304 containerd[1874]: time="2025-07-06T23:47:18.028279212Z" level=info msg="StartContainer for \"6c7ddca4f1223c1ff77972d0b439975ea122e39a58d33f81669bbb3356502d5a\""
Jul 6 23:47:18.030040 containerd[1874]: time="2025-07-06T23:47:18.029839762Z" level=info msg="connecting to shim 6c7ddca4f1223c1ff77972d0b439975ea122e39a58d33f81669bbb3356502d5a" address="unix:///run/containerd/s/5495f2b63e8a33ff9a5dd0c3d5cb13a24a430f6056ca4927a529cc9244f9870e" protocol=ttrpc version=3
Jul 6 23:47:18.045865 systemd[1]: Started cri-containerd-6c7ddca4f1223c1ff77972d0b439975ea122e39a58d33f81669bbb3356502d5a.scope - libcontainer container 6c7ddca4f1223c1ff77972d0b439975ea122e39a58d33f81669bbb3356502d5a.
Jul 6 23:47:18.078647 containerd[1874]: time="2025-07-06T23:47:18.078563360Z" level=info msg="StartContainer for \"6c7ddca4f1223c1ff77972d0b439975ea122e39a58d33f81669bbb3356502d5a\" returns successfully"
Jul 6 23:47:18.772174 kubelet[3343]: E0706 23:47:18.772141 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772174 kubelet[3343]: W0706 23:47:18.772164 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772174 kubelet[3343]: E0706 23:47:18.772184 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:18.772552 kubelet[3343]: E0706 23:47:18.772299 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772552 kubelet[3343]: W0706 23:47:18.772304 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772552 kubelet[3343]: E0706 23:47:18.772330 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:18.772552 kubelet[3343]: E0706 23:47:18.772425 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772552 kubelet[3343]: W0706 23:47:18.772438 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772552 kubelet[3343]: E0706 23:47:18.772443 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:18.772552 kubelet[3343]: E0706 23:47:18.772526 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772552 kubelet[3343]: W0706 23:47:18.772530 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772552 kubelet[3343]: E0706 23:47:18.772535 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:18.772680 kubelet[3343]: E0706 23:47:18.772633 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772680 kubelet[3343]: W0706 23:47:18.772637 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772680 kubelet[3343]: E0706 23:47:18.772642 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:18.772724 kubelet[3343]: E0706 23:47:18.772718 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772724 kubelet[3343]: W0706 23:47:18.772721 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772775 kubelet[3343]: E0706 23:47:18.772726 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:18.772852 kubelet[3343]: E0706 23:47:18.772843 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772852 kubelet[3343]: W0706 23:47:18.772850 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772903 kubelet[3343]: E0706 23:47:18.772855 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:18.772949 kubelet[3343]: E0706 23:47:18.772937 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.772949 kubelet[3343]: W0706 23:47:18.772943 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.772949 kubelet[3343]: E0706 23:47:18.772947 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:18.773053 kubelet[3343]: E0706 23:47:18.773045 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.773053 kubelet[3343]: W0706 23:47:18.773051 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.773092 kubelet[3343]: E0706 23:47:18.773056 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:18.773144 kubelet[3343]: E0706 23:47:18.773135 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.773144 kubelet[3343]: W0706 23:47:18.773140 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.773193 kubelet[3343]: E0706 23:47:18.773146 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:18.773226 kubelet[3343]: E0706 23:47:18.773214 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.773226 kubelet[3343]: W0706 23:47:18.773220 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.773280 kubelet[3343]: E0706 23:47:18.773226 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:18.773308 kubelet[3343]: E0706 23:47:18.773305 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.773325 kubelet[3343]: W0706 23:47:18.773310 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.773325 kubelet[3343]: E0706 23:47:18.773314 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 6 23:47:18.773410 kubelet[3343]: E0706 23:47:18.773399 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.773410 kubelet[3343]: W0706 23:47:18.773405 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.773410 kubelet[3343]: E0706 23:47:18.773410 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:47:18.773503 kubelet[3343]: E0706 23:47:18.773492 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:47:18.773503 kubelet[3343]: W0706 23:47:18.773498 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:47:18.773503 kubelet[3343]: E0706 23:47:18.773502 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.773597 kubelet[3343]: E0706 23:47:18.773587 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.773597 kubelet[3343]: W0706 23:47:18.773594 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.773631 kubelet[3343]: E0706 23:47:18.773599 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.803933 kubelet[3343]: E0706 23:47:18.803909 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.803933 kubelet[3343]: W0706 23:47:18.803926 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.803933 kubelet[3343]: E0706 23:47:18.803939 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.804096 kubelet[3343]: E0706 23:47:18.804076 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.804096 kubelet[3343]: W0706 23:47:18.804086 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.804142 kubelet[3343]: E0706 23:47:18.804093 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.804224 kubelet[3343]: E0706 23:47:18.804213 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.804224 kubelet[3343]: W0706 23:47:18.804220 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.804383 kubelet[3343]: E0706 23:47:18.804227 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.804461 kubelet[3343]: E0706 23:47:18.804445 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.804506 kubelet[3343]: W0706 23:47:18.804496 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.804557 kubelet[3343]: E0706 23:47:18.804547 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.804772 kubelet[3343]: E0706 23:47:18.804721 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.804772 kubelet[3343]: W0706 23:47:18.804731 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.804772 kubelet[3343]: E0706 23:47:18.804749 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.804883 kubelet[3343]: E0706 23:47:18.804868 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.804883 kubelet[3343]: W0706 23:47:18.804877 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.804929 kubelet[3343]: E0706 23:47:18.804886 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.805001 kubelet[3343]: E0706 23:47:18.804988 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.805001 kubelet[3343]: W0706 23:47:18.804996 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.805001 kubelet[3343]: E0706 23:47:18.805002 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.805167 kubelet[3343]: E0706 23:47:18.805154 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.805167 kubelet[3343]: W0706 23:47:18.805164 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.805215 kubelet[3343]: E0706 23:47:18.805176 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.805388 kubelet[3343]: E0706 23:47:18.805376 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.805480 kubelet[3343]: W0706 23:47:18.805435 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.805480 kubelet[3343]: E0706 23:47:18.805459 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.805628 kubelet[3343]: E0706 23:47:18.805609 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.805628 kubelet[3343]: W0706 23:47:18.805622 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.805666 kubelet[3343]: E0706 23:47:18.805634 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.805763 kubelet[3343]: E0706 23:47:18.805748 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.805763 kubelet[3343]: W0706 23:47:18.805756 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.805810 kubelet[3343]: E0706 23:47:18.805769 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.805904 kubelet[3343]: E0706 23:47:18.805891 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.805904 kubelet[3343]: W0706 23:47:18.805900 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.805904 kubelet[3343]: E0706 23:47:18.805907 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.806232 kubelet[3343]: E0706 23:47:18.806128 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.806232 kubelet[3343]: W0706 23:47:18.806138 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.806232 kubelet[3343]: E0706 23:47:18.806152 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.806361 kubelet[3343]: E0706 23:47:18.806349 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.806403 kubelet[3343]: W0706 23:47:18.806394 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.806452 kubelet[3343]: E0706 23:47:18.806442 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.806630 kubelet[3343]: E0706 23:47:18.806619 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.806710 kubelet[3343]: W0706 23:47:18.806698 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.806772 kubelet[3343]: E0706 23:47:18.806762 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.806972 kubelet[3343]: E0706 23:47:18.806952 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.806972 kubelet[3343]: W0706 23:47:18.806967 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.807042 kubelet[3343]: E0706 23:47:18.806981 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:18.807231 kubelet[3343]: E0706 23:47:18.807180 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.807231 kubelet[3343]: W0706 23:47:18.807190 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.807231 kubelet[3343]: E0706 23:47:18.807199 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:47:18.807464 kubelet[3343]: E0706 23:47:18.807445 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:47:18.807464 kubelet[3343]: W0706 23:47:18.807455 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:47:18.807464 kubelet[3343]: E0706 23:47:18.807463 3343 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:47:19.224821 containerd[1874]: time="2025-07-06T23:47:19.223275496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:19.225896 containerd[1874]: time="2025-07-06T23:47:19.225852652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:47:19.230235 containerd[1874]: time="2025-07-06T23:47:19.230161409Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:19.236834 containerd[1874]: time="2025-07-06T23:47:19.236459361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:19.237430 containerd[1874]: time="2025-07-06T23:47:19.237313130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.283072094s" Jul 6 23:47:19.237430 containerd[1874]: time="2025-07-06T23:47:19.237341611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:47:19.240211 containerd[1874]: time="2025-07-06T23:47:19.240187134Z" level=info msg="CreateContainer within sandbox \"06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:47:19.265712 containerd[1874]: time="2025-07-06T23:47:19.264979889Z" level=info msg="Container 317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:19.283610 containerd[1874]: time="2025-07-06T23:47:19.283570920Z" level=info msg="CreateContainer within sandbox \"06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64\"" Jul 6 23:47:19.284279 containerd[1874]: time="2025-07-06T23:47:19.284258156Z" level=info msg="StartContainer for \"317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64\"" Jul 6 23:47:19.286194 containerd[1874]: time="2025-07-06T23:47:19.286164635Z" level=info msg="connecting to shim 317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64" address="unix:///run/containerd/s/05a2a4310bd7a05319638424c875d2be8b0bb07e9ce6eb8c1915e1bf5ecb2b9f" protocol=ttrpc version=3 Jul 6 23:47:19.305873 systemd[1]: Started cri-containerd-317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64.scope - libcontainer container 317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64. 
Jul 6 23:47:19.346175 containerd[1874]: time="2025-07-06T23:47:19.346114552Z" level=info msg="StartContainer for \"317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64\" returns successfully" Jul 6 23:47:19.346552 systemd[1]: cri-containerd-317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64.scope: Deactivated successfully. Jul 6 23:47:19.350592 containerd[1874]: time="2025-07-06T23:47:19.350559106Z" level=info msg="received exit event container_id:\"317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64\" id:\"317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64\" pid:3999 exited_at:{seconds:1751845639 nanos:350244921}" Jul 6 23:47:19.350949 containerd[1874]: time="2025-07-06T23:47:19.350916652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64\" id:\"317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64\" pid:3999 exited_at:{seconds:1751845639 nanos:350244921}" Jul 6 23:47:19.375931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-317da9349bd027e670e474a902420dbaaffd5f69d679b3b375aa72b8ca94fc64-rootfs.mount: Deactivated successfully. 
Jul 6 23:47:19.646045 kubelet[3343]: E0706 23:47:19.645984 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tnbxw" podUID="4e241bd5-4cc4-4ff9-83ce-48a34c457465" Jul 6 23:47:19.730371 kubelet[3343]: I0706 23:47:19.730319 3343 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:47:19.745534 kubelet[3343]: I0706 23:47:19.745451 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67d9cb7c66-vdqlt" podStartSLOduration=3.128379209 podStartE2EDuration="4.745336687s" podCreationTimestamp="2025-07-06 23:47:15 +0000 UTC" firstStartedPulling="2025-07-06 23:47:16.336555241 +0000 UTC m=+25.776557233" lastFinishedPulling="2025-07-06 23:47:17.953512719 +0000 UTC m=+27.393514711" observedRunningTime="2025-07-06 23:47:18.740694731 +0000 UTC m=+28.180696771" watchObservedRunningTime="2025-07-06 23:47:19.745336687 +0000 UTC m=+29.185338679" Jul 6 23:47:20.735414 containerd[1874]: time="2025-07-06T23:47:20.735319937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:47:21.646272 kubelet[3343]: E0706 23:47:21.646223 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tnbxw" podUID="4e241bd5-4cc4-4ff9-83ce-48a34c457465" Jul 6 23:47:21.858661 kubelet[3343]: I0706 23:47:21.858626 3343 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:47:22.905308 containerd[1874]: time="2025-07-06T23:47:22.905249107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 6 23:47:22.912869 containerd[1874]: time="2025-07-06T23:47:22.912842089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:47:22.915683 containerd[1874]: time="2025-07-06T23:47:22.915657003Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:22.920347 containerd[1874]: time="2025-07-06T23:47:22.920308707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:22.920885 containerd[1874]: time="2025-07-06T23:47:22.920782216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.185327227s" Jul 6 23:47:22.920885 containerd[1874]: time="2025-07-06T23:47:22.920809689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:47:22.924172 containerd[1874]: time="2025-07-06T23:47:22.924143466Z" level=info msg="CreateContainer within sandbox \"06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:47:22.955768 containerd[1874]: time="2025-07-06T23:47:22.955378986Z" level=info msg="Container 80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:22.971658 containerd[1874]: time="2025-07-06T23:47:22.971623044Z" level=info 
msg="CreateContainer within sandbox \"06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099\"" Jul 6 23:47:22.972419 containerd[1874]: time="2025-07-06T23:47:22.972390514Z" level=info msg="StartContainer for \"80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099\"" Jul 6 23:47:22.973650 containerd[1874]: time="2025-07-06T23:47:22.973563244Z" level=info msg="connecting to shim 80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099" address="unix:///run/containerd/s/05a2a4310bd7a05319638424c875d2be8b0bb07e9ce6eb8c1915e1bf5ecb2b9f" protocol=ttrpc version=3 Jul 6 23:47:22.992867 systemd[1]: Started cri-containerd-80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099.scope - libcontainer container 80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099. Jul 6 23:47:23.032098 containerd[1874]: time="2025-07-06T23:47:23.032062359Z" level=info msg="StartContainer for \"80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099\" returns successfully" Jul 6 23:47:23.645927 kubelet[3343]: E0706 23:47:23.645857 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tnbxw" podUID="4e241bd5-4cc4-4ff9-83ce-48a34c457465" Jul 6 23:47:24.112200 containerd[1874]: time="2025-07-06T23:47:24.112064731Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:47:24.114978 systemd[1]: cri-containerd-80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099.scope: 
Deactivated successfully. Jul 6 23:47:24.115734 systemd[1]: cri-containerd-80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099.scope: Consumed 311ms CPU time, 187.9M memory peak, 165.8M written to disk. Jul 6 23:47:24.117421 containerd[1874]: time="2025-07-06T23:47:24.117239946Z" level=info msg="received exit event container_id:\"80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099\" id:\"80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099\" pid:4062 exited_at:{seconds:1751845644 nanos:116878088}" Jul 6 23:47:24.117535 containerd[1874]: time="2025-07-06T23:47:24.117518082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099\" id:\"80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099\" pid:4062 exited_at:{seconds:1751845644 nanos:116878088}" Jul 6 23:47:24.134373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80b214d35009ba01bc6e4060e275532aa8bd3023ab8dcd06ccaa132cef61a099-rootfs.mount: Deactivated successfully. 
Jul 6 23:47:24.164711 kubelet[3343]: I0706 23:47:24.164036 3343 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:47:24.538029 kubelet[3343]: I0706 23:47:24.234801 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-backend-key-pair\") pod \"whisker-7f797566bb-fqlpw\" (UID: \"0f26caff-9208-4f4d-8001-680f5c801d5b\") " pod="calico-system/whisker-7f797566bb-fqlpw" Jul 6 23:47:24.538029 kubelet[3343]: I0706 23:47:24.234832 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chdbj\" (UniqueName: \"kubernetes.io/projected/0f26caff-9208-4f4d-8001-680f5c801d5b-kube-api-access-chdbj\") pod \"whisker-7f797566bb-fqlpw\" (UID: \"0f26caff-9208-4f4d-8001-680f5c801d5b\") " pod="calico-system/whisker-7f797566bb-fqlpw" Jul 6 23:47:24.538029 kubelet[3343]: I0706 23:47:24.234846 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-ca-bundle\") pod \"whisker-7f797566bb-fqlpw\" (UID: \"0f26caff-9208-4f4d-8001-680f5c801d5b\") " pod="calico-system/whisker-7f797566bb-fqlpw" Jul 6 23:47:24.538029 kubelet[3343]: I0706 23:47:24.335501 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/715ecb67-8a9d-4d9a-94e6-df816364bfe3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-4wg5n\" (UID: \"715ecb67-8a9d-4d9a-94e6-df816364bfe3\") " pod="calico-system/goldmane-768f4c5c69-4wg5n" Jul 6 23:47:24.538029 kubelet[3343]: I0706 23:47:24.335540 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rsmf\" (UniqueName: 
\"kubernetes.io/projected/011e81b3-5666-4af9-a767-df1f20092cda-kube-api-access-6rsmf\") pod \"calico-apiserver-5cc56c86c5-l2zwh\" (UID: \"011e81b3-5666-4af9-a767-df1f20092cda\") " pod="calico-apiserver/calico-apiserver-5cc56c86c5-l2zwh" Jul 6 23:47:24.215062 systemd[1]: Created slice kubepods-besteffort-pod0f26caff_9208_4f4d_8001_680f5c801d5b.slice - libcontainer container kubepods-besteffort-pod0f26caff_9208_4f4d_8001_680f5c801d5b.slice. Jul 6 23:47:24.538286 kubelet[3343]: I0706 23:47:24.335555 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/715ecb67-8a9d-4d9a-94e6-df816364bfe3-config\") pod \"goldmane-768f4c5c69-4wg5n\" (UID: \"715ecb67-8a9d-4d9a-94e6-df816364bfe3\") " pod="calico-system/goldmane-768f4c5c69-4wg5n" Jul 6 23:47:24.538286 kubelet[3343]: I0706 23:47:24.335569 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frhz\" (UniqueName: \"kubernetes.io/projected/eb11f795-1482-49f1-9bdd-cb95b27924fb-kube-api-access-9frhz\") pod \"calico-kube-controllers-666c858d6f-x5lwm\" (UID: \"eb11f795-1482-49f1-9bdd-cb95b27924fb\") " pod="calico-system/calico-kube-controllers-666c858d6f-x5lwm" Jul 6 23:47:24.538286 kubelet[3343]: I0706 23:47:24.335584 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de6570c3-c974-4754-b6ac-36c815cefd46-calico-apiserver-certs\") pod \"calico-apiserver-5cc56c86c5-67km5\" (UID: \"de6570c3-c974-4754-b6ac-36c815cefd46\") " pod="calico-apiserver/calico-apiserver-5cc56c86c5-67km5" Jul 6 23:47:24.538286 kubelet[3343]: I0706 23:47:24.335603 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/715ecb67-8a9d-4d9a-94e6-df816364bfe3-goldmane-ca-bundle\") pod 
\"goldmane-768f4c5c69-4wg5n\" (UID: \"715ecb67-8a9d-4d9a-94e6-df816364bfe3\") " pod="calico-system/goldmane-768f4c5c69-4wg5n" Jul 6 23:47:24.538286 kubelet[3343]: I0706 23:47:24.335614 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pnlw\" (UniqueName: \"kubernetes.io/projected/de6570c3-c974-4754-b6ac-36c815cefd46-kube-api-access-8pnlw\") pod \"calico-apiserver-5cc56c86c5-67km5\" (UID: \"de6570c3-c974-4754-b6ac-36c815cefd46\") " pod="calico-apiserver/calico-apiserver-5cc56c86c5-67km5" Jul 6 23:47:24.226195 systemd[1]: Created slice kubepods-burstable-pod0de0c5ca_daa8_43f4_9a06_b7ef6f8f0c67.slice - libcontainer container kubepods-burstable-pod0de0c5ca_daa8_43f4_9a06_b7ef6f8f0c67.slice. Jul 6 23:47:24.538400 kubelet[3343]: I0706 23:47:24.335623 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/578e673d-a14d-48d0-98ff-91d8b46f26fc-config-volume\") pod \"coredns-668d6bf9bc-ghldx\" (UID: \"578e673d-a14d-48d0-98ff-91d8b46f26fc\") " pod="kube-system/coredns-668d6bf9bc-ghldx" Jul 6 23:47:24.538400 kubelet[3343]: I0706 23:47:24.335636 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v88f\" (UniqueName: \"kubernetes.io/projected/578e673d-a14d-48d0-98ff-91d8b46f26fc-kube-api-access-2v88f\") pod \"coredns-668d6bf9bc-ghldx\" (UID: \"578e673d-a14d-48d0-98ff-91d8b46f26fc\") " pod="kube-system/coredns-668d6bf9bc-ghldx" Jul 6 23:47:24.538400 kubelet[3343]: I0706 23:47:24.335656 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cz5\" (UniqueName: \"kubernetes.io/projected/715ecb67-8a9d-4d9a-94e6-df816364bfe3-kube-api-access-z8cz5\") pod \"goldmane-768f4c5c69-4wg5n\" (UID: \"715ecb67-8a9d-4d9a-94e6-df816364bfe3\") " pod="calico-system/goldmane-768f4c5c69-4wg5n" Jul 6 
23:47:24.538400 kubelet[3343]: I0706 23:47:24.335678 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67-config-volume\") pod \"coredns-668d6bf9bc-8g56m\" (UID: \"0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67\") " pod="kube-system/coredns-668d6bf9bc-8g56m" Jul 6 23:47:24.538400 kubelet[3343]: I0706 23:47:24.335689 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6wf\" (UniqueName: \"kubernetes.io/projected/0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67-kube-api-access-mc6wf\") pod \"coredns-668d6bf9bc-8g56m\" (UID: \"0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67\") " pod="kube-system/coredns-668d6bf9bc-8g56m" Jul 6 23:47:24.243361 systemd[1]: Created slice kubepods-besteffort-podde6570c3_c974_4754_b6ac_36c815cefd46.slice - libcontainer container kubepods-besteffort-podde6570c3_c974_4754_b6ac_36c815cefd46.slice. 
Jul 6 23:47:24.538505 kubelet[3343]: I0706 23:47:24.335702 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb11f795-1482-49f1-9bdd-cb95b27924fb-tigera-ca-bundle\") pod \"calico-kube-controllers-666c858d6f-x5lwm\" (UID: \"eb11f795-1482-49f1-9bdd-cb95b27924fb\") " pod="calico-system/calico-kube-controllers-666c858d6f-x5lwm" Jul 6 23:47:24.538505 kubelet[3343]: I0706 23:47:24.335715 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/011e81b3-5666-4af9-a767-df1f20092cda-calico-apiserver-certs\") pod \"calico-apiserver-5cc56c86c5-l2zwh\" (UID: \"011e81b3-5666-4af9-a767-df1f20092cda\") " pod="calico-apiserver/calico-apiserver-5cc56c86c5-l2zwh" Jul 6 23:47:24.252489 systemd[1]: Created slice kubepods-besteffort-pod715ecb67_8a9d_4d9a_94e6_df816364bfe3.slice - libcontainer container kubepods-besteffort-pod715ecb67_8a9d_4d9a_94e6_df816364bfe3.slice. Jul 6 23:47:24.258243 systemd[1]: Created slice kubepods-besteffort-podeb11f795_1482_49f1_9bdd_cb95b27924fb.slice - libcontainer container kubepods-besteffort-podeb11f795_1482_49f1_9bdd_cb95b27924fb.slice. Jul 6 23:47:24.262571 systemd[1]: Created slice kubepods-burstable-pod578e673d_a14d_48d0_98ff_91d8b46f26fc.slice - libcontainer container kubepods-burstable-pod578e673d_a14d_48d0_98ff_91d8b46f26fc.slice. Jul 6 23:47:24.270195 systemd[1]: Created slice kubepods-besteffort-pod011e81b3_5666_4af9_a767_df1f20092cda.slice - libcontainer container kubepods-besteffort-pod011e81b3_5666_4af9_a767_df1f20092cda.slice. 
Jul 6 23:47:24.839479 containerd[1874]: time="2025-07-06T23:47:24.839073320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f797566bb-fqlpw,Uid:0f26caff-9208-4f4d-8001-680f5c801d5b,Namespace:calico-system,Attempt:0,}"
Jul 6 23:47:24.855048 containerd[1874]: time="2025-07-06T23:47:24.854925847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4wg5n,Uid:715ecb67-8a9d-4d9a-94e6-df816364bfe3,Namespace:calico-system,Attempt:0,}"
Jul 6 23:47:24.855208 containerd[1874]: time="2025-07-06T23:47:24.855189959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c858d6f-x5lwm,Uid:eb11f795-1482-49f1-9bdd-cb95b27924fb,Namespace:calico-system,Attempt:0,}"
Jul 6 23:47:24.864430 containerd[1874]: time="2025-07-06T23:47:24.864315635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8g56m,Uid:0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67,Namespace:kube-system,Attempt:0,}"
Jul 6 23:47:24.866091 containerd[1874]: time="2025-07-06T23:47:24.866004590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-l2zwh,Uid:011e81b3-5666-4af9-a767-df1f20092cda,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:47:24.867532 containerd[1874]: time="2025-07-06T23:47:24.867470858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-67km5,Uid:de6570c3-c974-4754-b6ac-36c815cefd46,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:47:24.867919 containerd[1874]: time="2025-07-06T23:47:24.867710369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghldx,Uid:578e673d-a14d-48d0-98ff-91d8b46f26fc,Namespace:kube-system,Attempt:0,}"
Jul 6 23:47:25.086682 containerd[1874]: time="2025-07-06T23:47:25.086429548Z" level=error msg="Failed to destroy network for sandbox \"bcbce1cbd522de5b0067f085ebf8da273269bfd1f4f42ce9c994f09468ee9730\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.107752 containerd[1874]: time="2025-07-06T23:47:25.107306402Z" level=error msg="Failed to destroy network for sandbox \"dfd5676b5df49a70f3d5ea191e8f362471a444db55f779dd9edf76a19a25b82b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.125977 containerd[1874]: time="2025-07-06T23:47:25.125932580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f797566bb-fqlpw,Uid:0f26caff-9208-4f4d-8001-680f5c801d5b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbce1cbd522de5b0067f085ebf8da273269bfd1f4f42ce9c994f09468ee9730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.130605 kubelet[3343]: E0706 23:47:25.130556 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbce1cbd522de5b0067f085ebf8da273269bfd1f4f42ce9c994f09468ee9730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.131654 kubelet[3343]: E0706 23:47:25.130630 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbce1cbd522de5b0067f085ebf8da273269bfd1f4f42ce9c994f09468ee9730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f797566bb-fqlpw"
Jul 6 23:47:25.131654 kubelet[3343]: E0706 23:47:25.130647 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbce1cbd522de5b0067f085ebf8da273269bfd1f4f42ce9c994f09468ee9730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f797566bb-fqlpw"
Jul 6 23:47:25.131654 kubelet[3343]: E0706 23:47:25.130686 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f797566bb-fqlpw_calico-system(0f26caff-9208-4f4d-8001-680f5c801d5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f797566bb-fqlpw_calico-system(0f26caff-9208-4f4d-8001-680f5c801d5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcbce1cbd522de5b0067f085ebf8da273269bfd1f4f42ce9c994f09468ee9730\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f797566bb-fqlpw" podUID="0f26caff-9208-4f4d-8001-680f5c801d5b"
Jul 6 23:47:25.135302 containerd[1874]: time="2025-07-06T23:47:25.135247886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4wg5n,Uid:715ecb67-8a9d-4d9a-94e6-df816364bfe3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd5676b5df49a70f3d5ea191e8f362471a444db55f779dd9edf76a19a25b82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.135536 kubelet[3343]: E0706 23:47:25.135516 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd5676b5df49a70f3d5ea191e8f362471a444db55f779dd9edf76a19a25b82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.136074 kubelet[3343]: E0706 23:47:25.136053 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd5676b5df49a70f3d5ea191e8f362471a444db55f779dd9edf76a19a25b82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4wg5n"
Jul 6 23:47:25.136168 kubelet[3343]: E0706 23:47:25.136155 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd5676b5df49a70f3d5ea191e8f362471a444db55f779dd9edf76a19a25b82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4wg5n"
Jul 6 23:47:25.136268 kubelet[3343]: E0706 23:47:25.136240 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-4wg5n_calico-system(715ecb67-8a9d-4d9a-94e6-df816364bfe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-4wg5n_calico-system(715ecb67-8a9d-4d9a-94e6-df816364bfe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfd5676b5df49a70f3d5ea191e8f362471a444db55f779dd9edf76a19a25b82b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-4wg5n" podUID="715ecb67-8a9d-4d9a-94e6-df816364bfe3"
Jul 6 23:47:25.159614 containerd[1874]: time="2025-07-06T23:47:25.159538683Z" level=error msg="Failed to destroy network for sandbox \"b6314e8e431896a7cfc40814f262ef4c6c8fa965b743238a03c904d4208b9619\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.161189 systemd[1]: run-netns-cni\x2d5e6fe1ab\x2ddb66\x2df6a4\x2d9d04\x2db37703982a43.mount: Deactivated successfully.
Jul 6 23:47:25.166577 containerd[1874]: time="2025-07-06T23:47:25.166468804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c858d6f-x5lwm,Uid:eb11f795-1482-49f1-9bdd-cb95b27924fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6314e8e431896a7cfc40814f262ef4c6c8fa965b743238a03c904d4208b9619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.167091 kubelet[3343]: E0706 23:47:25.166720 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6314e8e431896a7cfc40814f262ef4c6c8fa965b743238a03c904d4208b9619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.167091 kubelet[3343]: E0706 23:47:25.166792 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6314e8e431896a7cfc40814f262ef4c6c8fa965b743238a03c904d4208b9619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666c858d6f-x5lwm"
Jul 6 23:47:25.167091 kubelet[3343]: E0706 23:47:25.166809 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6314e8e431896a7cfc40814f262ef4c6c8fa965b743238a03c904d4208b9619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666c858d6f-x5lwm"
Jul 6 23:47:25.167947 kubelet[3343]: E0706 23:47:25.166852 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-666c858d6f-x5lwm_calico-system(eb11f795-1482-49f1-9bdd-cb95b27924fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-666c858d6f-x5lwm_calico-system(eb11f795-1482-49f1-9bdd-cb95b27924fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6314e8e431896a7cfc40814f262ef4c6c8fa965b743238a03c904d4208b9619\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-666c858d6f-x5lwm" podUID="eb11f795-1482-49f1-9bdd-cb95b27924fb"
Jul 6 23:47:25.182664 containerd[1874]: time="2025-07-06T23:47:25.182602859Z" level=error msg="Failed to destroy network for sandbox \"11d8d67c5d60c3e1b73d6257473c09d0eb6f19753c23924261c628e89bac4e7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.185098 systemd[1]: run-netns-cni\x2db0af21f7\x2dbf9b\x2d8b9e\x2d86e5\x2df3fdbb1202f5.mount: Deactivated successfully.
Jul 6 23:47:25.191089 containerd[1874]: time="2025-07-06T23:47:25.190962192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8g56m,Uid:0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d8d67c5d60c3e1b73d6257473c09d0eb6f19753c23924261c628e89bac4e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.191484 kubelet[3343]: E0706 23:47:25.191237 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d8d67c5d60c3e1b73d6257473c09d0eb6f19753c23924261c628e89bac4e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.191484 kubelet[3343]: E0706 23:47:25.191290 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d8d67c5d60c3e1b73d6257473c09d0eb6f19753c23924261c628e89bac4e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8g56m"
Jul 6 23:47:25.191484 kubelet[3343]: E0706 23:47:25.191306 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d8d67c5d60c3e1b73d6257473c09d0eb6f19753c23924261c628e89bac4e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8g56m"
Jul 6 23:47:25.192809 kubelet[3343]: E0706 23:47:25.191350 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8g56m_kube-system(0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8g56m_kube-system(0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11d8d67c5d60c3e1b73d6257473c09d0eb6f19753c23924261c628e89bac4e7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8g56m" podUID="0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67"
Jul 6 23:47:25.196397 containerd[1874]: time="2025-07-06T23:47:25.196371107Z" level=error msg="Failed to destroy network for sandbox \"5cda990026ccee9e69e987a32e4b53c54f3673d9d00b3c1fadd4fbe6b309d33f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.197995 systemd[1]: run-netns-cni\x2d8f15a028\x2dd7ed\x2dfea3\x2d0600\x2d8cee34ab2844.mount: Deactivated successfully.
Jul 6 23:47:25.201083 containerd[1874]: time="2025-07-06T23:47:25.201042136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghldx,Uid:578e673d-a14d-48d0-98ff-91d8b46f26fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cda990026ccee9e69e987a32e4b53c54f3673d9d00b3c1fadd4fbe6b309d33f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.201624 kubelet[3343]: E0706 23:47:25.201352 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cda990026ccee9e69e987a32e4b53c54f3673d9d00b3c1fadd4fbe6b309d33f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.201624 kubelet[3343]: E0706 23:47:25.201399 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cda990026ccee9e69e987a32e4b53c54f3673d9d00b3c1fadd4fbe6b309d33f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ghldx"
Jul 6 23:47:25.201624 kubelet[3343]: E0706 23:47:25.201412 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cda990026ccee9e69e987a32e4b53c54f3673d9d00b3c1fadd4fbe6b309d33f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ghldx"
Jul 6 23:47:25.202518 kubelet[3343]: E0706 23:47:25.201450 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ghldx_kube-system(578e673d-a14d-48d0-98ff-91d8b46f26fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ghldx_kube-system(578e673d-a14d-48d0-98ff-91d8b46f26fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cda990026ccee9e69e987a32e4b53c54f3673d9d00b3c1fadd4fbe6b309d33f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ghldx" podUID="578e673d-a14d-48d0-98ff-91d8b46f26fc"
Jul 6 23:47:25.207277 containerd[1874]: time="2025-07-06T23:47:25.207200474Z" level=error msg="Failed to destroy network for sandbox \"50b4bf4cc3b2114bcfc462ef4868e792b229cc0325156baffb7d1318be798d91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.208703 systemd[1]: run-netns-cni\x2df1bdf51a\x2d4fbe\x2d05e5\x2d5260\x2d3b1b3598ae37.mount: Deactivated successfully.
Jul 6 23:47:25.211604 containerd[1874]: time="2025-07-06T23:47:25.211544741Z" level=error msg="Failed to destroy network for sandbox \"49af19b571cdb45a3b53d884a3422b9e0240d8c86cdfd8603fb623b4494e468d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.213750 containerd[1874]: time="2025-07-06T23:47:25.213633340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-l2zwh,Uid:011e81b3-5666-4af9-a767-df1f20092cda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b4bf4cc3b2114bcfc462ef4868e792b229cc0325156baffb7d1318be798d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.214549 kubelet[3343]: E0706 23:47:25.214526 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b4bf4cc3b2114bcfc462ef4868e792b229cc0325156baffb7d1318be798d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.215080 kubelet[3343]: E0706 23:47:25.214780 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b4bf4cc3b2114bcfc462ef4868e792b229cc0325156baffb7d1318be798d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc56c86c5-l2zwh"
Jul 6 23:47:25.215080 kubelet[3343]: E0706 23:47:25.214833 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b4bf4cc3b2114bcfc462ef4868e792b229cc0325156baffb7d1318be798d91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc56c86c5-l2zwh"
Jul 6 23:47:25.215921 kubelet[3343]: E0706 23:47:25.215868 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc56c86c5-l2zwh_calico-apiserver(011e81b3-5666-4af9-a767-df1f20092cda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc56c86c5-l2zwh_calico-apiserver(011e81b3-5666-4af9-a767-df1f20092cda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50b4bf4cc3b2114bcfc462ef4868e792b229cc0325156baffb7d1318be798d91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc56c86c5-l2zwh" podUID="011e81b3-5666-4af9-a767-df1f20092cda"
Jul 6 23:47:25.220837 containerd[1874]: time="2025-07-06T23:47:25.220787236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-67km5,Uid:de6570c3-c974-4754-b6ac-36c815cefd46,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49af19b571cdb45a3b53d884a3422b9e0240d8c86cdfd8603fb623b4494e468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.221129 kubelet[3343]: E0706 23:47:25.221038 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49af19b571cdb45a3b53d884a3422b9e0240d8c86cdfd8603fb623b4494e468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.221129 kubelet[3343]: E0706 23:47:25.221098 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49af19b571cdb45a3b53d884a3422b9e0240d8c86cdfd8603fb623b4494e468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc56c86c5-67km5"
Jul 6 23:47:25.221129 kubelet[3343]: E0706 23:47:25.221112 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49af19b571cdb45a3b53d884a3422b9e0240d8c86cdfd8603fb623b4494e468d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc56c86c5-67km5"
Jul 6 23:47:25.221308 kubelet[3343]: E0706 23:47:25.221250 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc56c86c5-67km5_calico-apiserver(de6570c3-c974-4754-b6ac-36c815cefd46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc56c86c5-67km5_calico-apiserver(de6570c3-c974-4754-b6ac-36c815cefd46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49af19b571cdb45a3b53d884a3422b9e0240d8c86cdfd8603fb623b4494e468d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc56c86c5-67km5" podUID="de6570c3-c974-4754-b6ac-36c815cefd46"
Jul 6 23:47:25.650753 systemd[1]: Created slice kubepods-besteffort-pod4e241bd5_4cc4_4ff9_83ce_48a34c457465.slice - libcontainer container kubepods-besteffort-pod4e241bd5_4cc4_4ff9_83ce_48a34c457465.slice.
Jul 6 23:47:25.653583 containerd[1874]: time="2025-07-06T23:47:25.653511076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tnbxw,Uid:4e241bd5-4cc4-4ff9-83ce-48a34c457465,Namespace:calico-system,Attempt:0,}"
Jul 6 23:47:25.692574 containerd[1874]: time="2025-07-06T23:47:25.692494037Z" level=error msg="Failed to destroy network for sandbox \"60ade88d794e9f362cde620adeeb1779d82a30c93ace366fc19f8c7913d1cf3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.701365 containerd[1874]: time="2025-07-06T23:47:25.701283302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tnbxw,Uid:4e241bd5-4cc4-4ff9-83ce-48a34c457465,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ade88d794e9f362cde620adeeb1779d82a30c93ace366fc19f8c7913d1cf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.701712 kubelet[3343]: E0706 23:47:25.701595 3343 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ade88d794e9f362cde620adeeb1779d82a30c93ace366fc19f8c7913d1cf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:47:25.701712 kubelet[3343]: E0706 23:47:25.701665 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ade88d794e9f362cde620adeeb1779d82a30c93ace366fc19f8c7913d1cf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tnbxw"
Jul 6 23:47:25.701712 kubelet[3343]: E0706 23:47:25.701681 3343 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ade88d794e9f362cde620adeeb1779d82a30c93ace366fc19f8c7913d1cf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tnbxw"
Jul 6 23:47:25.701941 kubelet[3343]: E0706 23:47:25.701901 3343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tnbxw_calico-system(4e241bd5-4cc4-4ff9-83ce-48a34c457465)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tnbxw_calico-system(4e241bd5-4cc4-4ff9-83ce-48a34c457465)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60ade88d794e9f362cde620adeeb1779d82a30c93ace366fc19f8c7913d1cf3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tnbxw" podUID="4e241bd5-4cc4-4ff9-83ce-48a34c457465"
Jul 6 23:47:25.752516 containerd[1874]: time="2025-07-06T23:47:25.752453423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 6 23:47:26.132426 systemd[1]: run-netns-cni\x2de4b3987b\x2d391f\x2d78fe\x2d62e4\x2d6ed6b518bb64.mount: Deactivated successfully.
Jul 6 23:47:30.759880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647379547.mount: Deactivated successfully.
Jul 6 23:47:31.369551 containerd[1874]: time="2025-07-06T23:47:31.369048858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:31.372718 containerd[1874]: time="2025-07-06T23:47:31.372691792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909"
Jul 6 23:47:31.378821 containerd[1874]: time="2025-07-06T23:47:31.378781664Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:31.392350 containerd[1874]: time="2025-07-06T23:47:31.392140403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:31.393764 containerd[1874]: time="2025-07-06T23:47:31.393677362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 5.641191995s"
Jul 6 23:47:31.393764 containerd[1874]: time="2025-07-06T23:47:31.393708635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\""
Jul 6 23:47:31.409911 containerd[1874]: time="2025-07-06T23:47:31.409022505Z" level=info msg="CreateContainer within sandbox \"06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 6 23:47:31.465192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854769552.mount: Deactivated successfully.
Jul 6 23:47:31.465553 containerd[1874]: time="2025-07-06T23:47:31.465520483Z" level=info msg="Container 386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:31.483531 containerd[1874]: time="2025-07-06T23:47:31.483488537Z" level=info msg="CreateContainer within sandbox \"06c9ea9e66a13f449e8baf58843725883f57385df52d12e30a8e8ebc5ef0b238\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\""
Jul 6 23:47:31.484255 containerd[1874]: time="2025-07-06T23:47:31.484231080Z" level=info msg="StartContainer for \"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\""
Jul 6 23:47:31.485428 containerd[1874]: time="2025-07-06T23:47:31.485399811Z" level=info msg="connecting to shim 386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514" address="unix:///run/containerd/s/05a2a4310bd7a05319638424c875d2be8b0bb07e9ce6eb8c1915e1bf5ecb2b9f" protocol=ttrpc version=3
Jul 6 23:47:31.499878 systemd[1]: Started cri-containerd-386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514.scope - libcontainer container 386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514.
Jul 6 23:47:31.537416 containerd[1874]: time="2025-07-06T23:47:31.537352891Z" level=info msg="StartContainer for \"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\" returns successfully"
Jul 6 23:47:31.809992 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 6 23:47:31.810127 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jul 6 23:47:31.870555 containerd[1874]: time="2025-07-06T23:47:31.870443259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\" id:\"f7114a68e08f8fdaa2d3c7af2be380197d280c64edd2cf5b595351f6aba9a3c7\" pid:4397 exit_status:1 exited_at:{seconds:1751845651 nanos:869805208}" Jul 6 23:47:31.924644 kubelet[3343]: I0706 23:47:31.924582 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-trht4" podStartSLOduration=1.908757727 podStartE2EDuration="16.924563373s" podCreationTimestamp="2025-07-06 23:47:15 +0000 UTC" firstStartedPulling="2025-07-06 23:47:16.378661524 +0000 UTC m=+25.818663516" lastFinishedPulling="2025-07-06 23:47:31.394467162 +0000 UTC m=+40.834469162" observedRunningTime="2025-07-06 23:47:31.797438791 +0000 UTC m=+41.237440815" watchObservedRunningTime="2025-07-06 23:47:31.924563373 +0000 UTC m=+41.364565373" Jul 6 23:47:31.982064 kubelet[3343]: I0706 23:47:31.982018 3343 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-ca-bundle\") pod \"0f26caff-9208-4f4d-8001-680f5c801d5b\" (UID: \"0f26caff-9208-4f4d-8001-680f5c801d5b\") " Jul 6 23:47:31.982064 kubelet[3343]: I0706 23:47:31.982118 3343 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-backend-key-pair\") pod \"0f26caff-9208-4f4d-8001-680f5c801d5b\" (UID: \"0f26caff-9208-4f4d-8001-680f5c801d5b\") " Jul 6 23:47:31.982064 kubelet[3343]: I0706 23:47:31.982138 3343 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chdbj\" (UniqueName: \"kubernetes.io/projected/0f26caff-9208-4f4d-8001-680f5c801d5b-kube-api-access-chdbj\") pod 
\"0f26caff-9208-4f4d-8001-680f5c801d5b\" (UID: \"0f26caff-9208-4f4d-8001-680f5c801d5b\") " Jul 6 23:47:31.982497 kubelet[3343]: I0706 23:47:31.982384 3343 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0f26caff-9208-4f4d-8001-680f5c801d5b" (UID: "0f26caff-9208-4f4d-8001-680f5c801d5b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:47:31.987509 systemd[1]: var-lib-kubelet-pods-0f26caff\x2d9208\x2d4f4d\x2d8001\x2d680f5c801d5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dchdbj.mount: Deactivated successfully. Jul 6 23:47:31.987858 kubelet[3343]: I0706 23:47:31.987541 3343 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f26caff-9208-4f4d-8001-680f5c801d5b-kube-api-access-chdbj" (OuterVolumeSpecName: "kube-api-access-chdbj") pod "0f26caff-9208-4f4d-8001-680f5c801d5b" (UID: "0f26caff-9208-4f4d-8001-680f5c801d5b"). InnerVolumeSpecName "kube-api-access-chdbj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:47:31.990454 kubelet[3343]: I0706 23:47:31.988880 3343 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0f26caff-9208-4f4d-8001-680f5c801d5b" (UID: "0f26caff-9208-4f4d-8001-680f5c801d5b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:47:31.991013 systemd[1]: var-lib-kubelet-pods-0f26caff\x2d9208\x2d4f4d\x2d8001\x2d680f5c801d5b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 6 23:47:32.083704 kubelet[3343]: I0706 23:47:32.083586 3343 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-chdbj\" (UniqueName: \"kubernetes.io/projected/0f26caff-9208-4f4d-8001-680f5c801d5b-kube-api-access-chdbj\") on node \"ci-4344.1.1-a-ba147b1783\" DevicePath \"\"" Jul 6 23:47:32.083704 kubelet[3343]: I0706 23:47:32.083622 3343 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-ca-bundle\") on node \"ci-4344.1.1-a-ba147b1783\" DevicePath \"\"" Jul 6 23:47:32.083704 kubelet[3343]: I0706 23:47:32.083631 3343 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f26caff-9208-4f4d-8001-680f5c801d5b-whisker-backend-key-pair\") on node \"ci-4344.1.1-a-ba147b1783\" DevicePath \"\"" Jul 6 23:47:32.652567 systemd[1]: Removed slice kubepods-besteffort-pod0f26caff_9208_4f4d_8001_680f5c801d5b.slice - libcontainer container kubepods-besteffort-pod0f26caff_9208_4f4d_8001_680f5c801d5b.slice. Jul 6 23:47:32.859451 systemd[1]: Created slice kubepods-besteffort-poddc83cb61_d98f_4807_9bdc_42b0548a1175.slice - libcontainer container kubepods-besteffort-poddc83cb61_d98f_4807_9bdc_42b0548a1175.slice. 
Jul 6 23:47:32.860613 containerd[1874]: time="2025-07-06T23:47:32.860296046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\" id:\"5c7a042501849fb657e33cff8457e7569dc0b8dad54b288d0c9861646a0ef4c9\" pid:4440 exit_status:1 exited_at:{seconds:1751845652 nanos:859816217}" Jul 6 23:47:32.889763 kubelet[3343]: I0706 23:47:32.889607 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc83cb61-d98f-4807-9bdc-42b0548a1175-whisker-ca-bundle\") pod \"whisker-6db8f88dfd-bhsrz\" (UID: \"dc83cb61-d98f-4807-9bdc-42b0548a1175\") " pod="calico-system/whisker-6db8f88dfd-bhsrz" Jul 6 23:47:32.889763 kubelet[3343]: I0706 23:47:32.889648 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc83cb61-d98f-4807-9bdc-42b0548a1175-whisker-backend-key-pair\") pod \"whisker-6db8f88dfd-bhsrz\" (UID: \"dc83cb61-d98f-4807-9bdc-42b0548a1175\") " pod="calico-system/whisker-6db8f88dfd-bhsrz" Jul 6 23:47:32.889763 kubelet[3343]: I0706 23:47:32.889662 3343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvq8\" (UniqueName: \"kubernetes.io/projected/dc83cb61-d98f-4807-9bdc-42b0548a1175-kube-api-access-kcvq8\") pod \"whisker-6db8f88dfd-bhsrz\" (UID: \"dc83cb61-d98f-4807-9bdc-42b0548a1175\") " pod="calico-system/whisker-6db8f88dfd-bhsrz" Jul 6 23:47:33.168656 containerd[1874]: time="2025-07-06T23:47:33.168619126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db8f88dfd-bhsrz,Uid:dc83cb61-d98f-4807-9bdc-42b0548a1175,Namespace:calico-system,Attempt:0,}" Jul 6 23:47:33.359659 systemd-networkd[1614]: cali93949602eb2: Link UP Jul 6 23:47:33.360224 systemd-networkd[1614]: cali93949602eb2: Gained carrier Jul 6 
23:47:33.394565 containerd[1874]: 2025-07-06 23:47:33.214 [INFO][4540] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:47:33.394565 containerd[1874]: 2025-07-06 23:47:33.250 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0 whisker-6db8f88dfd- calico-system dc83cb61-d98f-4807-9bdc-42b0548a1175 910 0 2025-07-06 23:47:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6db8f88dfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 whisker-6db8f88dfd-bhsrz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali93949602eb2 [] [] }} ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-" Jul 6 23:47:33.394565 containerd[1874]: 2025-07-06 23:47:33.250 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" Jul 6 23:47:33.394565 containerd[1874]: 2025-07-06 23:47:33.289 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" HandleID="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Workload="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.290 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" HandleID="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Workload="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-ba147b1783", "pod":"whisker-6db8f88dfd-bhsrz", "timestamp":"2025-07-06 23:47:33.288997584 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.290 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.290 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.290 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783' Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.297 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.301 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.306 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.311 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.394772 containerd[1874]: 2025-07-06 23:47:33.313 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.395675 containerd[1874]: 2025-07-06 23:47:33.313 [INFO][4558] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.395675 containerd[1874]: 2025-07-06 23:47:33.316 [INFO][4558] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4 Jul 6 23:47:33.395675 containerd[1874]: 2025-07-06 23:47:33.328 [INFO][4558] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.395675 containerd[1874]: 2025-07-06 23:47:33.334 [INFO][4558] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.39.1/26] block=192.168.39.0/26 handle="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.395675 containerd[1874]: 2025-07-06 23:47:33.334 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.1/26] handle="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:33.395675 containerd[1874]: 2025-07-06 23:47:33.334 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:47:33.395675 containerd[1874]: 2025-07-06 23:47:33.334 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.1/26] IPv6=[] ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" HandleID="k8s-pod-network.f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Workload="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" Jul 6 23:47:33.396238 containerd[1874]: 2025-07-06 23:47:33.338 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0", GenerateName:"whisker-6db8f88dfd-", Namespace:"calico-system", SelfLink:"", UID:"dc83cb61-d98f-4807-9bdc-42b0548a1175", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6db8f88dfd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"whisker-6db8f88dfd-bhsrz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.39.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93949602eb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:33.396238 containerd[1874]: 2025-07-06 23:47:33.338 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.1/32] ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" Jul 6 23:47:33.396543 containerd[1874]: 2025-07-06 23:47:33.338 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93949602eb2 ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" Jul 6 23:47:33.396543 containerd[1874]: 2025-07-06 23:47:33.361 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" Jul 6 23:47:33.396579 containerd[1874]: 2025-07-06 23:47:33.362 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0", GenerateName:"whisker-6db8f88dfd-", Namespace:"calico-system", SelfLink:"", UID:"dc83cb61-d98f-4807-9bdc-42b0548a1175", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6db8f88dfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4", Pod:"whisker-6db8f88dfd-bhsrz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.39.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93949602eb2", MAC:"76:ee:7b:2c:70:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:33.396619 containerd[1874]: 2025-07-06 23:47:33.392 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" Namespace="calico-system" Pod="whisker-6db8f88dfd-bhsrz" 
WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-whisker--6db8f88dfd--bhsrz-eth0" Jul 6 23:47:33.455339 containerd[1874]: time="2025-07-06T23:47:33.455248739Z" level=info msg="connecting to shim f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4" address="unix:///run/containerd/s/5d337e1d10402705cdced00a3ee326f181fa41e704caee4434af687622e886a6" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:33.477911 systemd[1]: Started cri-containerd-f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4.scope - libcontainer container f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4. Jul 6 23:47:33.581733 containerd[1874]: time="2025-07-06T23:47:33.581689168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db8f88dfd-bhsrz,Uid:dc83cb61-d98f-4807-9bdc-42b0548a1175,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4\"" Jul 6 23:47:33.583346 containerd[1874]: time="2025-07-06T23:47:33.583317013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:47:33.697581 systemd-networkd[1614]: vxlan.calico: Link UP Jul 6 23:47:33.697590 systemd-networkd[1614]: vxlan.calico: Gained carrier Jul 6 23:47:34.647446 containerd[1874]: time="2025-07-06T23:47:34.647310738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:34.648528 kubelet[3343]: I0706 23:47:34.648499 3343 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f26caff-9208-4f4d-8001-680f5c801d5b" path="/var/lib/kubelet/pods/0f26caff-9208-4f4d-8001-680f5c801d5b/volumes" Jul 6 23:47:34.649862 containerd[1874]: time="2025-07-06T23:47:34.649833177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:47:34.652231 containerd[1874]: time="2025-07-06T23:47:34.652187836Z" level=info 
msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:34.657760 containerd[1874]: time="2025-07-06T23:47:34.657659981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:34.658399 containerd[1874]: time="2025-07-06T23:47:34.658380730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.075028571s" Jul 6 23:47:34.658520 containerd[1874]: time="2025-07-06T23:47:34.658445580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:47:34.660783 containerd[1874]: time="2025-07-06T23:47:34.660465500Z" level=info msg="CreateContainer within sandbox \"f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:47:34.683696 containerd[1874]: time="2025-07-06T23:47:34.683668849Z" level=info msg="Container 54612fffda79ad768ad91bd6d5acd29bb7a8963d8ad4096c0bad46a2c90dc615: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:34.686319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337177210.mount: Deactivated successfully. 
Jul 6 23:47:34.704526 containerd[1874]: time="2025-07-06T23:47:34.704478938Z" level=info msg="CreateContainer within sandbox \"f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"54612fffda79ad768ad91bd6d5acd29bb7a8963d8ad4096c0bad46a2c90dc615\"" Jul 6 23:47:34.706450 containerd[1874]: time="2025-07-06T23:47:34.706404800Z" level=info msg="StartContainer for \"54612fffda79ad768ad91bd6d5acd29bb7a8963d8ad4096c0bad46a2c90dc615\"" Jul 6 23:47:34.707403 containerd[1874]: time="2025-07-06T23:47:34.707330771Z" level=info msg="connecting to shim 54612fffda79ad768ad91bd6d5acd29bb7a8963d8ad4096c0bad46a2c90dc615" address="unix:///run/containerd/s/5d337e1d10402705cdced00a3ee326f181fa41e704caee4434af687622e886a6" protocol=ttrpc version=3 Jul 6 23:47:34.726875 systemd[1]: Started cri-containerd-54612fffda79ad768ad91bd6d5acd29bb7a8963d8ad4096c0bad46a2c90dc615.scope - libcontainer container 54612fffda79ad768ad91bd6d5acd29bb7a8963d8ad4096c0bad46a2c90dc615. Jul 6 23:47:34.760934 containerd[1874]: time="2025-07-06T23:47:34.760879909Z" level=info msg="StartContainer for \"54612fffda79ad768ad91bd6d5acd29bb7a8963d8ad4096c0bad46a2c90dc615\" returns successfully" Jul 6 23:47:34.762799 containerd[1874]: time="2025-07-06T23:47:34.762241419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:47:35.124901 systemd-networkd[1614]: vxlan.calico: Gained IPv6LL Jul 6 23:47:35.316913 systemd-networkd[1614]: cali93949602eb2: Gained IPv6LL Jul 6 23:47:36.356181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146811161.mount: Deactivated successfully. 
Jul 6 23:47:36.647822 containerd[1874]: time="2025-07-06T23:47:36.647702697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c858d6f-x5lwm,Uid:eb11f795-1482-49f1-9bdd-cb95b27924fb,Namespace:calico-system,Attempt:0,}" Jul 6 23:47:36.907968 containerd[1874]: time="2025-07-06T23:47:36.906714677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:36.912170 containerd[1874]: time="2025-07-06T23:47:36.911880791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:47:36.922190 containerd[1874]: time="2025-07-06T23:47:36.922141983Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:36.929355 systemd-networkd[1614]: cali5b6d8cfcf3f: Link UP Jul 6 23:47:36.929919 systemd-networkd[1614]: cali5b6d8cfcf3f: Gained carrier Jul 6 23:47:36.934813 containerd[1874]: time="2025-07-06T23:47:36.933981004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:36.937783 containerd[1874]: time="2025-07-06T23:47:36.937731334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.174964868s" Jul 6 23:47:36.937920 containerd[1874]: time="2025-07-06T23:47:36.937893202Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:47:36.944170 containerd[1874]: time="2025-07-06T23:47:36.944107601Z" level=info msg="CreateContainer within sandbox \"f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:47:36.947371 containerd[1874]: 2025-07-06 23:47:36.872 [INFO][4766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0 calico-kube-controllers-666c858d6f- calico-system eb11f795-1482-49f1-9bdd-cb95b27924fb 832 0 2025-07-06 23:47:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:666c858d6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 calico-kube-controllers-666c858d6f-x5lwm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5b6d8cfcf3f [] [] }} ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-" Jul 6 23:47:36.947371 containerd[1874]: 2025-07-06 23:47:36.872 [INFO][4766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" Jul 6 23:47:36.947371 containerd[1874]: 2025-07-06 23:47:36.888 [INFO][4783] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" HandleID="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.888 [INFO][4783] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" HandleID="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-ba147b1783", "pod":"calico-kube-controllers-666c858d6f-x5lwm", "timestamp":"2025-07-06 23:47:36.888410154 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.888 [INFO][4783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.888 [INFO][4783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.888 [INFO][4783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783' Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.894 [INFO][4783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.898 [INFO][4783] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.902 [INFO][4783] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.904 [INFO][4783] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947519 containerd[1874]: 2025-07-06 23:47:36.905 [INFO][4783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947653 containerd[1874]: 2025-07-06 23:47:36.905 [INFO][4783] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947653 containerd[1874]: 2025-07-06 23:47:36.908 [INFO][4783] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c Jul 6 23:47:36.947653 containerd[1874]: 2025-07-06 23:47:36.912 [INFO][4783] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947653 containerd[1874]: 2025-07-06 23:47:36.922 [INFO][4783] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.39.2/26] block=192.168.39.0/26 handle="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947653 containerd[1874]: 2025-07-06 23:47:36.923 [INFO][4783] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.2/26] handle="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:36.947653 containerd[1874]: 2025-07-06 23:47:36.923 [INFO][4783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:47:36.947653 containerd[1874]: 2025-07-06 23:47:36.923 [INFO][4783] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.2/26] IPv6=[] ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" HandleID="k8s-pod-network.b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" Jul 6 23:47:36.948779 containerd[1874]: 2025-07-06 23:47:36.924 [INFO][4766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0", GenerateName:"calico-kube-controllers-666c858d6f-", Namespace:"calico-system", SelfLink:"", UID:"eb11f795-1482-49f1-9bdd-cb95b27924fb", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"666c858d6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"calico-kube-controllers-666c858d6f-x5lwm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b6d8cfcf3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:36.949259 containerd[1874]: 2025-07-06 23:47:36.925 [INFO][4766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.2/32] ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" Jul 6 23:47:36.949259 containerd[1874]: 2025-07-06 23:47:36.925 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b6d8cfcf3f ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" Jul 6 23:47:36.949259 containerd[1874]: 2025-07-06 23:47:36.930 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" 
Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" Jul 6 23:47:36.949399 containerd[1874]: 2025-07-06 23:47:36.931 [INFO][4766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0", GenerateName:"calico-kube-controllers-666c858d6f-", Namespace:"calico-system", SelfLink:"", UID:"eb11f795-1482-49f1-9bdd-cb95b27924fb", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666c858d6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c", Pod:"calico-kube-controllers-666c858d6f-x5lwm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b6d8cfcf3f", MAC:"e2:0a:2a:65:de:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:36.949461 containerd[1874]: 2025-07-06 23:47:36.943 [INFO][4766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" Namespace="calico-system" Pod="calico-kube-controllers-666c858d6f-x5lwm" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--kube--controllers--666c858d6f--x5lwm-eth0" Jul 6 23:47:37.006273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983240293.mount: Deactivated successfully. Jul 6 23:47:37.008114 containerd[1874]: time="2025-07-06T23:47:37.007945076Z" level=info msg="Container b6d568620f8685f8be6a1582b2092013d7a3497139c092d6a813a251468bc53f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:37.041006 containerd[1874]: time="2025-07-06T23:47:37.040967485Z" level=info msg="CreateContainer within sandbox \"f5e8c5cc491b8df18390011e63a615fb9184d65d1725d5fb76f58322b29724a4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b6d568620f8685f8be6a1582b2092013d7a3497139c092d6a813a251468bc53f\"" Jul 6 23:47:37.044245 containerd[1874]: time="2025-07-06T23:47:37.044211784Z" level=info msg="StartContainer for \"b6d568620f8685f8be6a1582b2092013d7a3497139c092d6a813a251468bc53f\"" Jul 6 23:47:37.046133 containerd[1874]: time="2025-07-06T23:47:37.046079005Z" level=info msg="connecting to shim b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c" address="unix:///run/containerd/s/964af447edad02a601bc970116a9b092c6e01db90df092fb3d1093f92f5b699b" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:37.046436 containerd[1874]: time="2025-07-06T23:47:37.046412878Z" level=info msg="connecting to shim b6d568620f8685f8be6a1582b2092013d7a3497139c092d6a813a251468bc53f" 
address="unix:///run/containerd/s/5d337e1d10402705cdced00a3ee326f181fa41e704caee4434af687622e886a6" protocol=ttrpc version=3 Jul 6 23:47:37.063909 systemd[1]: Started cri-containerd-b6d568620f8685f8be6a1582b2092013d7a3497139c092d6a813a251468bc53f.scope - libcontainer container b6d568620f8685f8be6a1582b2092013d7a3497139c092d6a813a251468bc53f. Jul 6 23:47:37.067546 systemd[1]: Started cri-containerd-b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c.scope - libcontainer container b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c. Jul 6 23:47:37.121512 containerd[1874]: time="2025-07-06T23:47:37.121476429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c858d6f-x5lwm,Uid:eb11f795-1482-49f1-9bdd-cb95b27924fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c\"" Jul 6 23:47:37.123786 containerd[1874]: time="2025-07-06T23:47:37.123577128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:47:37.123875 containerd[1874]: time="2025-07-06T23:47:37.123843216Z" level=info msg="StartContainer for \"b6d568620f8685f8be6a1582b2092013d7a3497139c092d6a813a251468bc53f\" returns successfully" Jul 6 23:47:37.647621 containerd[1874]: time="2025-07-06T23:47:37.647577146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8g56m,Uid:0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67,Namespace:kube-system,Attempt:0,}" Jul 6 23:47:37.647839 containerd[1874]: time="2025-07-06T23:47:37.647581914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4wg5n,Uid:715ecb67-8a9d-4d9a-94e6-df816364bfe3,Namespace:calico-system,Attempt:0,}" Jul 6 23:47:37.648076 containerd[1874]: time="2025-07-06T23:47:37.647612707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-67km5,Uid:de6570c3-c974-4754-b6ac-36c815cefd46,Namespace:calico-apiserver,Attempt:0,}" 
Jul 6 23:47:37.648219 containerd[1874]: time="2025-07-06T23:47:37.647650556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tnbxw,Uid:4e241bd5-4cc4-4ff9-83ce-48a34c457465,Namespace:calico-system,Attempt:0,}" Jul 6 23:47:37.648219 containerd[1874]: time="2025-07-06T23:47:37.647668700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghldx,Uid:578e673d-a14d-48d0-98ff-91d8b46f26fc,Namespace:kube-system,Attempt:0,}" Jul 6 23:47:37.823801 kubelet[3343]: I0706 23:47:37.823731 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6db8f88dfd-bhsrz" podStartSLOduration=2.466527134 podStartE2EDuration="5.823715076s" podCreationTimestamp="2025-07-06 23:47:32 +0000 UTC" firstStartedPulling="2025-07-06 23:47:33.582914354 +0000 UTC m=+43.022916346" lastFinishedPulling="2025-07-06 23:47:36.940102296 +0000 UTC m=+46.380104288" observedRunningTime="2025-07-06 23:47:37.823585152 +0000 UTC m=+47.263587144" watchObservedRunningTime="2025-07-06 23:47:37.823715076 +0000 UTC m=+47.263717068" Jul 6 23:47:38.008010 systemd-networkd[1614]: cali76ad1ffe498: Link UP Jul 6 23:47:38.010048 systemd-networkd[1614]: cali76ad1ffe498: Gained carrier Jul 6 23:47:38.027234 containerd[1874]: 2025-07-06 23:47:37.727 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0 goldmane-768f4c5c69- calico-system 715ecb67-8a9d-4d9a-94e6-df816364bfe3 834 0 2025-07-06 23:47:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 goldmane-768f4c5c69-4wg5n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali76ad1ffe498 [] [] }} 
ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-" Jul 6 23:47:38.027234 containerd[1874]: 2025-07-06 23:47:37.728 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" Jul 6 23:47:38.027234 containerd[1874]: 2025-07-06 23:47:37.795 [INFO][4921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" HandleID="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Workload="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.796 [INFO][4921] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" HandleID="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Workload="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-ba147b1783", "pod":"goldmane-768f4c5c69-4wg5n", "timestamp":"2025-07-06 23:47:37.795199834 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.796 [INFO][4921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.796 [INFO][4921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.796 [INFO][4921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783' Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.808 [INFO][4921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.825 [INFO][4921] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.846 [INFO][4921] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.853 [INFO][4921] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.027868 containerd[1874]: 2025-07-06 23:47:37.856 [INFO][4921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.028016 containerd[1874]: 2025-07-06 23:47:37.856 [INFO][4921] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.028016 containerd[1874]: 2025-07-06 23:47:37.868 [INFO][4921] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547 Jul 6 23:47:38.028016 containerd[1874]: 2025-07-06 23:47:37.887 [INFO][4921] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" 
host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.028016 containerd[1874]: 2025-07-06 23:47:37.998 [INFO][4921] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.3/26] block=192.168.39.0/26 handle="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.028016 containerd[1874]: 2025-07-06 23:47:37.999 [INFO][4921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.3/26] handle="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.028016 containerd[1874]: 2025-07-06 23:47:37.999 [INFO][4921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:47:38.028016 containerd[1874]: 2025-07-06 23:47:37.999 [INFO][4921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.3/26] IPv6=[] ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" HandleID="k8s-pod-network.cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Workload="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" Jul 6 23:47:38.028111 containerd[1874]: 2025-07-06 23:47:38.000 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"715ecb67-8a9d-4d9a-94e6-df816364bfe3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"goldmane-768f4c5c69-4wg5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.39.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76ad1ffe498", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.028111 containerd[1874]: 2025-07-06 23:47:38.000 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.3/32] ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" Jul 6 23:47:38.028159 containerd[1874]: 2025-07-06 23:47:38.000 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76ad1ffe498 ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" Jul 6 23:47:38.028159 containerd[1874]: 2025-07-06 23:47:38.012 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" 
WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" Jul 6 23:47:38.028188 containerd[1874]: 2025-07-06 23:47:38.012 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"715ecb67-8a9d-4d9a-94e6-df816364bfe3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547", Pod:"goldmane-768f4c5c69-4wg5n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.39.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76ad1ffe498", MAC:"0a:b6:e8:04:00:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.028219 
containerd[1874]: 2025-07-06 23:47:38.025 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" Namespace="calico-system" Pod="goldmane-768f4c5c69-4wg5n" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-goldmane--768f4c5c69--4wg5n-eth0" Jul 6 23:47:38.065865 systemd-networkd[1614]: cali9a59b23d46b: Link UP Jul 6 23:47:38.066026 systemd-networkd[1614]: cali9a59b23d46b: Gained carrier Jul 6 23:47:38.077538 containerd[1874]: 2025-07-06 23:47:37.742 [INFO][4891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0 calico-apiserver-5cc56c86c5- calico-apiserver de6570c3-c974-4754-b6ac-36c815cefd46 835 0 2025-07-06 23:47:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc56c86c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 calico-apiserver-5cc56c86c5-67km5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a59b23d46b [] [] }} ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-" Jul 6 23:47:38.077538 containerd[1874]: 2025-07-06 23:47:37.742 [INFO][4891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" Jul 6 23:47:38.077538 containerd[1874]: 2025-07-06 23:47:37.832 [INFO][4936] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" HandleID="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:37.832 [INFO][4936] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" HandleID="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-ba147b1783", "pod":"calico-apiserver-5cc56c86c5-67km5", "timestamp":"2025-07-06 23:47:37.832288277 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:37.832 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:37.999 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:37.999 [INFO][4936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783' Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:38.012 [INFO][4936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:38.018 [INFO][4936] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:38.029 [INFO][4936] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:38.031 [INFO][4936] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.077840 containerd[1874]: 2025-07-06 23:47:38.033 [INFO][4936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.078328 containerd[1874]: 2025-07-06 23:47:38.033 [INFO][4936] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.078328 containerd[1874]: 2025-07-06 23:47:38.034 [INFO][4936] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe Jul 6 23:47:38.078328 containerd[1874]: 2025-07-06 23:47:38.039 [INFO][4936] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.078328 containerd[1874]: 2025-07-06 23:47:38.057 [INFO][4936] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.39.4/26] block=192.168.39.0/26 handle="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.078328 containerd[1874]: 2025-07-06 23:47:38.057 [INFO][4936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.4/26] handle="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.078328 containerd[1874]: 2025-07-06 23:47:38.057 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:47:38.078328 containerd[1874]: 2025-07-06 23:47:38.058 [INFO][4936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.4/26] IPv6=[] ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" HandleID="k8s-pod-network.7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" Jul 6 23:47:38.078465 containerd[1874]: 2025-07-06 23:47:38.060 [INFO][4891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0", GenerateName:"calico-apiserver-5cc56c86c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"de6570c3-c974-4754-b6ac-36c815cefd46", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5cc56c86c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"calico-apiserver-5cc56c86c5-67km5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a59b23d46b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.078514 containerd[1874]: 2025-07-06 23:47:38.060 [INFO][4891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.4/32] ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" Jul 6 23:47:38.078514 containerd[1874]: 2025-07-06 23:47:38.060 [INFO][4891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a59b23d46b ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" Jul 6 23:47:38.078514 containerd[1874]: 2025-07-06 23:47:38.062 [INFO][4891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" 
WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" Jul 6 23:47:38.078563 containerd[1874]: 2025-07-06 23:47:38.062 [INFO][4891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0", GenerateName:"calico-apiserver-5cc56c86c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"de6570c3-c974-4754-b6ac-36c815cefd46", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc56c86c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe", Pod:"calico-apiserver-5cc56c86c5-67km5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a59b23d46b", MAC:"fe:5d:8f:e2:8c:ff", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.078610 containerd[1874]: 2025-07-06 23:47:38.075 [INFO][4891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-67km5" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--67km5-eth0" Jul 6 23:47:38.133309 containerd[1874]: time="2025-07-06T23:47:38.133200068Z" level=info msg="connecting to shim cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547" address="unix:///run/containerd/s/cd6f8dfee4e781659d48374b4572c9d2efa60746e4381d154c244afe2f60a999" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:38.154888 systemd[1]: Started cri-containerd-cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547.scope - libcontainer container cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547. 
Jul 6 23:47:38.187717 systemd-networkd[1614]: cali3071520e732: Link UP Jul 6 23:47:38.188118 systemd-networkd[1614]: cali3071520e732: Gained carrier Jul 6 23:47:38.200919 containerd[1874]: time="2025-07-06T23:47:38.200607740Z" level=info msg="connecting to shim 7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe" address="unix:///run/containerd/s/64b847b0b31c2327d4efc59b9ebad371221f6e5e6a95a1dac1c0b3b5ae339cb7" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:38.220811 containerd[1874]: 2025-07-06 23:47:37.774 [INFO][4903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0 csi-node-driver- calico-system 4e241bd5-4cc4-4ff9-83ce-48a34c457465 723 0 2025-07-06 23:47:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 csi-node-driver-tnbxw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3071520e732 [] [] }} ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-" Jul 6 23:47:38.220811 containerd[1874]: 2025-07-06 23:47:37.774 [INFO][4903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" Jul 6 23:47:38.220811 containerd[1874]: 2025-07-06 23:47:37.853 [INFO][4954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" HandleID="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Workload="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:37.853 [INFO][4954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" HandleID="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Workload="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000395920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-ba147b1783", "pod":"csi-node-driver-tnbxw", "timestamp":"2025-07-06 23:47:37.853282211 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:37.853 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:38.057 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:38.058 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783' Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:38.111 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:38.119 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:38.130 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:38.132 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.220987 containerd[1874]: 2025-07-06 23:47:38.135 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.221129 containerd[1874]: 2025-07-06 23:47:38.135 [INFO][4954] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.221129 containerd[1874]: 2025-07-06 23:47:38.138 [INFO][4954] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1 Jul 6 23:47:38.221129 containerd[1874]: 2025-07-06 23:47:38.147 [INFO][4954] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.221129 containerd[1874]: 2025-07-06 23:47:38.169 [INFO][4954] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.39.5/26] block=192.168.39.0/26 handle="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.221129 containerd[1874]: 2025-07-06 23:47:38.169 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.5/26] handle="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.221129 containerd[1874]: 2025-07-06 23:47:38.170 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:47:38.221129 containerd[1874]: 2025-07-06 23:47:38.170 [INFO][4954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.5/26] IPv6=[] ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" HandleID="k8s-pod-network.a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Workload="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" Jul 6 23:47:38.221223 containerd[1874]: 2025-07-06 23:47:38.179 [INFO][4903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e241bd5-4cc4-4ff9-83ce-48a34c457465", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"csi-node-driver-tnbxw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3071520e732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.221256 containerd[1874]: 2025-07-06 23:47:38.179 [INFO][4903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.5/32] ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" Jul 6 23:47:38.221256 containerd[1874]: 2025-07-06 23:47:38.179 [INFO][4903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3071520e732 ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" Jul 6 23:47:38.221256 containerd[1874]: 2025-07-06 23:47:38.190 [INFO][4903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" Jul 6 23:47:38.221297 containerd[1874]: 2025-07-06 23:47:38.194 [INFO][4903] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e241bd5-4cc4-4ff9-83ce-48a34c457465", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1", Pod:"csi-node-driver-tnbxw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3071520e732", MAC:"f6:6e:97:a2:7b:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.221330 containerd[1874]: 2025-07-06 23:47:38.214 [INFO][4903] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" Namespace="calico-system" Pod="csi-node-driver-tnbxw" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-csi--node--driver--tnbxw-eth0" Jul 6 23:47:38.228293 containerd[1874]: time="2025-07-06T23:47:38.228258005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4wg5n,Uid:715ecb67-8a9d-4d9a-94e6-df816364bfe3,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547\"" Jul 6 23:47:38.249398 systemd[1]: Started cri-containerd-7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe.scope - libcontainer container 7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe. Jul 6 23:47:38.272115 systemd-networkd[1614]: cali6b932223d0b: Link UP Jul 6 23:47:38.273502 systemd-networkd[1614]: cali6b932223d0b: Gained carrier Jul 6 23:47:38.303179 containerd[1874]: 2025-07-06 23:47:37.794 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0 coredns-668d6bf9bc- kube-system 0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67 836 0 2025-07-06 23:46:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 coredns-668d6bf9bc-8g56m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b932223d0b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-" Jul 6 23:47:38.303179 containerd[1874]: 2025-07-06 23:47:37.794 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" Jul 6 23:47:38.303179 containerd[1874]: 2025-07-06 23:47:37.854 [INFO][4960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" HandleID="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Workload="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:37.854 [INFO][4960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" HandleID="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Workload="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-a-ba147b1783", "pod":"coredns-668d6bf9bc-8g56m", "timestamp":"2025-07-06 23:47:37.854460476 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:37.854 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:38.170 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:38.170 [INFO][4960] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783' Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:38.212 [INFO][4960] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:38.220 [INFO][4960] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:38.229 [INFO][4960] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:38.237 [INFO][4960] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303487 containerd[1874]: 2025-07-06 23:47:38.241 [INFO][4960] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303679 containerd[1874]: 2025-07-06 23:47:38.242 [INFO][4960] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303679 containerd[1874]: 2025-07-06 23:47:38.244 [INFO][4960] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2 Jul 6 23:47:38.303679 containerd[1874]: 2025-07-06 23:47:38.252 [INFO][4960] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303679 containerd[1874]: 2025-07-06 23:47:38.263 [INFO][4960] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.39.6/26] block=192.168.39.0/26 handle="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303679 containerd[1874]: 2025-07-06 23:47:38.263 [INFO][4960] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.6/26] handle="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.303679 containerd[1874]: 2025-07-06 23:47:38.263 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:47:38.303679 containerd[1874]: 2025-07-06 23:47:38.263 [INFO][4960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.6/26] IPv6=[] ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" HandleID="k8s-pod-network.a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Workload="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" Jul 6 23:47:38.303860 containerd[1874]: 2025-07-06 23:47:38.266 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"coredns-668d6bf9bc-8g56m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b932223d0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.303860 containerd[1874]: 2025-07-06 23:47:38.266 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.6/32] ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" Jul 6 23:47:38.303860 containerd[1874]: 2025-07-06 23:47:38.266 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b932223d0b ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" Jul 6 23:47:38.303860 containerd[1874]: 2025-07-06 23:47:38.278 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" Jul 6 23:47:38.303860 containerd[1874]: 2025-07-06 23:47:38.279 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2", Pod:"coredns-668d6bf9bc-8g56m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b932223d0b", MAC:"da:f9:f2:e6:67:9b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:47:38.303860 containerd[1874]: 2025-07-06 23:47:38.296 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-8g56m" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--8g56m-eth0" Jul 6 23:47:38.319427 containerd[1874]: time="2025-07-06T23:47:38.318831425Z" level=info msg="connecting to shim a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1" address="unix:///run/containerd/s/43ef982d34058b5dffa317f115c938833d1f83bbf1dc91adec07a250219ce80c" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:47:38.325968 containerd[1874]: time="2025-07-06T23:47:38.325495100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-67km5,Uid:de6570c3-c974-4754-b6ac-36c815cefd46,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe\"" Jul 6 23:47:38.343150 systemd[1]: Started cri-containerd-a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1.scope - libcontainer container a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1. 
Jul 6 23:47:38.368939 systemd-networkd[1614]: cali47329cafc3c: Link UP Jul 6 23:47:38.369986 systemd-networkd[1614]: cali47329cafc3c: Gained carrier Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:37.828 [INFO][4931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0 coredns-668d6bf9bc- kube-system 578e673d-a14d-48d0-98ff-91d8b46f26fc 833 0 2025-07-06 23:46:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 coredns-668d6bf9bc-ghldx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali47329cafc3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:37.828 [INFO][4931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:37.873 [INFO][4972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" HandleID="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Workload="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:37.873 [INFO][4972] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" HandleID="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Workload="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-a-ba147b1783", "pod":"coredns-668d6bf9bc-ghldx", "timestamp":"2025-07-06 23:47:37.873678857 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:37.873 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.263 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.263 [INFO][4972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783' Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.313 [INFO][4972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.321 [INFO][4972] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.337 [INFO][4972] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.342 [INFO][4972] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.345 [INFO][4972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.345 [INFO][4972] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.347 [INFO][4972] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10 Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.353 [INFO][4972] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" host="ci-4344.1.1-a-ba147b1783" Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.363 [INFO][4972] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.39.7/26] block=192.168.39.0/26 handle="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.363 [INFO][4972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.7/26] handle="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.363 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:47:38.397028 containerd[1874]: 2025-07-06 23:47:38.363 [INFO][4972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.7/26] IPv6=[] ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" HandleID="k8s-pod-network.fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Workload="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0"
Jul 6 23:47:38.398519 containerd[1874]: 2025-07-06 23:47:38.365 [INFO][4931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"578e673d-a14d-48d0-98ff-91d8b46f26fc", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 46, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"coredns-668d6bf9bc-ghldx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47329cafc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:47:38.398519 containerd[1874]: 2025-07-06 23:47:38.366 [INFO][4931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.7/32] ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0"
Jul 6 23:47:38.398519 containerd[1874]: 2025-07-06 23:47:38.366 [INFO][4931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47329cafc3c ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0"
Jul 6 23:47:38.398519 containerd[1874]: 2025-07-06 23:47:38.369 [INFO][4931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0"
Jul 6 23:47:38.398519 containerd[1874]: 2025-07-06 23:47:38.370 [INFO][4931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"578e673d-a14d-48d0-98ff-91d8b46f26fc", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 46, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10", Pod:"coredns-668d6bf9bc-ghldx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47329cafc3c", MAC:"12:12:00:95:04:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:47:38.398519 containerd[1874]: 2025-07-06 23:47:38.392 [INFO][4931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghldx" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-coredns--668d6bf9bc--ghldx-eth0"
Jul 6 23:47:38.402320 containerd[1874]: time="2025-07-06T23:47:38.402278364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tnbxw,Uid:4e241bd5-4cc4-4ff9-83ce-48a34c457465,Namespace:calico-system,Attempt:0,} returns sandbox id \"a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1\""
Jul 6 23:47:38.409355 containerd[1874]: time="2025-07-06T23:47:38.409313890Z" level=info msg="connecting to shim a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2" address="unix:///run/containerd/s/eb8d8704af6582255b571456ac0d81706731c588d79d4891dd9e893a9ff8288c" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:47:38.432914 systemd[1]: Started cri-containerd-a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2.scope - libcontainer container a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2.
Jul 6 23:47:38.467314 containerd[1874]: time="2025-07-06T23:47:38.467272896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8g56m,Uid:0de0c5ca-daa8-43f4-9a06-b7ef6f8f0c67,Namespace:kube-system,Attempt:0,} returns sandbox id \"a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2\""
Jul 6 23:47:38.470229 containerd[1874]: time="2025-07-06T23:47:38.469895993Z" level=info msg="CreateContainer within sandbox \"a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:47:38.486654 containerd[1874]: time="2025-07-06T23:47:38.486610168Z" level=info msg="connecting to shim fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10" address="unix:///run/containerd/s/2bd3682636869ff34d27c57824f1dd57db064b9c21ff24edee029c4973a53154" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:47:38.505636 containerd[1874]: time="2025-07-06T23:47:38.505567677Z" level=info msg="Container f0b4a70c2a4c50bbceac8f93bb625067c67764fd656909a8e3b1e4b2d2916842: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:38.507026 systemd[1]: Started cri-containerd-fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10.scope - libcontainer container fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10.
Jul 6 23:47:38.516871 systemd-networkd[1614]: cali5b6d8cfcf3f: Gained IPv6LL
Jul 6 23:47:38.527729 containerd[1874]: time="2025-07-06T23:47:38.527630545Z" level=info msg="CreateContainer within sandbox \"a701b9753ea79b360f4126925e8b3260ccfe70bd026f3cd3e4dbdffd20056bd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0b4a70c2a4c50bbceac8f93bb625067c67764fd656909a8e3b1e4b2d2916842\""
Jul 6 23:47:38.530342 containerd[1874]: time="2025-07-06T23:47:38.530311325Z" level=info msg="StartContainer for \"f0b4a70c2a4c50bbceac8f93bb625067c67764fd656909a8e3b1e4b2d2916842\""
Jul 6 23:47:38.532818 containerd[1874]: time="2025-07-06T23:47:38.532769090Z" level=info msg="connecting to shim f0b4a70c2a4c50bbceac8f93bb625067c67764fd656909a8e3b1e4b2d2916842" address="unix:///run/containerd/s/eb8d8704af6582255b571456ac0d81706731c588d79d4891dd9e893a9ff8288c" protocol=ttrpc version=3
Jul 6 23:47:38.550346 containerd[1874]: time="2025-07-06T23:47:38.550305575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghldx,Uid:578e673d-a14d-48d0-98ff-91d8b46f26fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10\""
Jul 6 23:47:38.550937 systemd[1]: Started cri-containerd-f0b4a70c2a4c50bbceac8f93bb625067c67764fd656909a8e3b1e4b2d2916842.scope - libcontainer container f0b4a70c2a4c50bbceac8f93bb625067c67764fd656909a8e3b1e4b2d2916842.
Jul 6 23:47:38.554776 containerd[1874]: time="2025-07-06T23:47:38.554746756Z" level=info msg="CreateContainer within sandbox \"fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:47:38.585637 containerd[1874]: time="2025-07-06T23:47:38.585602256Z" level=info msg="StartContainer for \"f0b4a70c2a4c50bbceac8f93bb625067c67764fd656909a8e3b1e4b2d2916842\" returns successfully"
Jul 6 23:47:38.611263 containerd[1874]: time="2025-07-06T23:47:38.611210560Z" level=info msg="Container 7b853e6aff64b0f7b0f3d4fead76d6a4e07a9ed97b5491dcb943d841d8fe781b: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:38.633102 containerd[1874]: time="2025-07-06T23:47:38.633063591Z" level=info msg="CreateContainer within sandbox \"fc0c8625af72f0f0e5101181155e6cdad4abf7f2f0a4ea879c64a9d177b85b10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b853e6aff64b0f7b0f3d4fead76d6a4e07a9ed97b5491dcb943d841d8fe781b\""
Jul 6 23:47:38.634819 containerd[1874]: time="2025-07-06T23:47:38.634787007Z" level=info msg="StartContainer for \"7b853e6aff64b0f7b0f3d4fead76d6a4e07a9ed97b5491dcb943d841d8fe781b\""
Jul 6 23:47:38.636178 containerd[1874]: time="2025-07-06T23:47:38.636150005Z" level=info msg="connecting to shim 7b853e6aff64b0f7b0f3d4fead76d6a4e07a9ed97b5491dcb943d841d8fe781b" address="unix:///run/containerd/s/2bd3682636869ff34d27c57824f1dd57db064b9c21ff24edee029c4973a53154" protocol=ttrpc version=3
Jul 6 23:47:38.648671 containerd[1874]: time="2025-07-06T23:47:38.648639053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-l2zwh,Uid:011e81b3-5666-4af9-a767-df1f20092cda,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:47:38.662728 systemd[1]: Started cri-containerd-7b853e6aff64b0f7b0f3d4fead76d6a4e07a9ed97b5491dcb943d841d8fe781b.scope - libcontainer container 7b853e6aff64b0f7b0f3d4fead76d6a4e07a9ed97b5491dcb943d841d8fe781b.
Jul 6 23:47:38.712445 containerd[1874]: time="2025-07-06T23:47:38.712397246Z" level=info msg="StartContainer for \"7b853e6aff64b0f7b0f3d4fead76d6a4e07a9ed97b5491dcb943d841d8fe781b\" returns successfully"
Jul 6 23:47:38.791441 systemd-networkd[1614]: cali43f419dfbaa: Link UP
Jul 6 23:47:38.792240 systemd-networkd[1614]: cali43f419dfbaa: Gained carrier
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.707 [INFO][5306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0 calico-apiserver-5cc56c86c5- calico-apiserver 011e81b3-5666-4af9-a767-df1f20092cda 831 0 2025-07-06 23:47:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc56c86c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-ba147b1783 calico-apiserver-5cc56c86c5-l2zwh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43f419dfbaa [] [] }} ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.708 [INFO][5306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.740 [INFO][5333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" HandleID="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.741 [INFO][5333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" HandleID="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-ba147b1783", "pod":"calico-apiserver-5cc56c86c5-l2zwh", "timestamp":"2025-07-06 23:47:38.740371265 +0000 UTC"}, Hostname:"ci-4344.1.1-a-ba147b1783", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.741 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.741 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.741 [INFO][5333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-ba147b1783'
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.752 [INFO][5333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.757 [INFO][5333] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.763 [INFO][5333] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.765 [INFO][5333] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.767 [INFO][5333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.767 [INFO][5333] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.769 [INFO][5333] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.774 [INFO][5333] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.785 [INFO][5333] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.8/26] block=192.168.39.0/26 handle="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.785 [INFO][5333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.8/26] handle="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" host="ci-4344.1.1-a-ba147b1783"
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.785 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:47:38.809991 containerd[1874]: 2025-07-06 23:47:38.785 [INFO][5333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.8/26] IPv6=[] ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" HandleID="k8s-pod-network.609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Workload="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0"
Jul 6 23:47:38.811046 containerd[1874]: 2025-07-06 23:47:38.787 [INFO][5306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0", GenerateName:"calico-apiserver-5cc56c86c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"011e81b3-5666-4af9-a767-df1f20092cda", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc56c86c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"", Pod:"calico-apiserver-5cc56c86c5-l2zwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43f419dfbaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:47:38.811046 containerd[1874]: 2025-07-06 23:47:38.788 [INFO][5306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.8/32] ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0"
Jul 6 23:47:38.811046 containerd[1874]: 2025-07-06 23:47:38.788 [INFO][5306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43f419dfbaa ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0"
Jul 6 23:47:38.811046 containerd[1874]: 2025-07-06 23:47:38.793 [INFO][5306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0"
Jul 6 23:47:38.811046 containerd[1874]: 2025-07-06 23:47:38.793 [INFO][5306] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0", GenerateName:"calico-apiserver-5cc56c86c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"011e81b3-5666-4af9-a767-df1f20092cda", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 47, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc56c86c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-ba147b1783", ContainerID:"609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2", Pod:"calico-apiserver-5cc56c86c5-l2zwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43f419dfbaa", MAC:"5e:f3:02:89:a9:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:47:38.811046 containerd[1874]: 2025-07-06 23:47:38.807 [INFO][5306] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" Namespace="calico-apiserver" Pod="calico-apiserver-5cc56c86c5-l2zwh" WorkloadEndpoint="ci--4344.1.1--a--ba147b1783-k8s-calico--apiserver--5cc56c86c5--l2zwh-eth0"
Jul 6 23:47:38.849298 kubelet[3343]: I0706 23:47:38.849234 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8g56m" podStartSLOduration=40.849218558 podStartE2EDuration="40.849218558s" podCreationTimestamp="2025-07-06 23:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:47:38.848875092 +0000 UTC m=+48.288877100" watchObservedRunningTime="2025-07-06 23:47:38.849218558 +0000 UTC m=+48.289220550"
Jul 6 23:47:38.895020 kubelet[3343]: I0706 23:47:38.894300 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ghldx" podStartSLOduration=41.894282961 podStartE2EDuration="41.894282961s" podCreationTimestamp="2025-07-06 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:47:38.892463422 +0000 UTC m=+48.332465454" watchObservedRunningTime="2025-07-06 23:47:38.894282961 +0000 UTC m=+48.334284953"
Jul 6 23:47:38.898628 containerd[1874]: time="2025-07-06T23:47:38.898169575Z" level=info msg="connecting to shim 609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2" address="unix:///run/containerd/s/81195c70895e0917eeef238c7b601586e851696ba7146a65bd58280482528959" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:47:38.937024 systemd[1]: Started cri-containerd-609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2.scope - libcontainer container 609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2.
Jul 6 23:47:39.047692 containerd[1874]: time="2025-07-06T23:47:39.047571017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc56c86c5-l2zwh,Uid:011e81b3-5666-4af9-a767-df1f20092cda,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2\""
Jul 6 23:47:39.157648 systemd-networkd[1614]: cali9a59b23d46b: Gained IPv6LL
Jul 6 23:47:39.860893 systemd-networkd[1614]: cali47329cafc3c: Gained IPv6LL
Jul 6 23:47:39.924923 systemd-networkd[1614]: cali76ad1ffe498: Gained IPv6LL
Jul 6 23:47:39.988926 systemd-networkd[1614]: cali3071520e732: Gained IPv6LL
Jul 6 23:47:40.052882 systemd-networkd[1614]: cali6b932223d0b: Gained IPv6LL
Jul 6 23:47:40.710893 containerd[1874]: time="2025-07-06T23:47:40.710847510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:40.716866 containerd[1874]: time="2025-07-06T23:47:40.716830282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Jul 6 23:47:40.723820 containerd[1874]: time="2025-07-06T23:47:40.723774345Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:40.727704 containerd[1874]: time="2025-07-06T23:47:40.727629715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:40.728175 containerd[1874]: time="2025-07-06T23:47:40.727907578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.604302169s"
Jul 6 23:47:40.728175 containerd[1874]: time="2025-07-06T23:47:40.727935203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Jul 6 23:47:40.729413 containerd[1874]: time="2025-07-06T23:47:40.729385163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 6 23:47:40.744026 containerd[1874]: time="2025-07-06T23:47:40.743992172Z" level=info msg="CreateContainer within sandbox \"b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 6 23:47:40.772800 containerd[1874]: time="2025-07-06T23:47:40.772199843Z" level=info msg="Container a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:40.773353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120186884.mount: Deactivated successfully.
Jul 6 23:47:40.789606 containerd[1874]: time="2025-07-06T23:47:40.789572368Z" level=info msg="CreateContainer within sandbox \"b0f04d610fee27e75bb4b144040516d19c6de6214adbd74897ff4f255bc7e86c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\""
Jul 6 23:47:40.791030 containerd[1874]: time="2025-07-06T23:47:40.791010823Z" level=info msg="StartContainer for \"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\""
Jul 6 23:47:40.792797 containerd[1874]: time="2025-07-06T23:47:40.792768504Z" level=info msg="connecting to shim a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106" address="unix:///run/containerd/s/964af447edad02a601bc970116a9b092c6e01db90df092fb3d1093f92f5b699b" protocol=ttrpc version=3
Jul 6 23:47:40.811860 systemd[1]: Started cri-containerd-a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106.scope - libcontainer container a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106.
Jul 6 23:47:40.821264 systemd-networkd[1614]: cali43f419dfbaa: Gained IPv6LL
Jul 6 23:47:40.846561 containerd[1874]: time="2025-07-06T23:47:40.846520412Z" level=info msg="StartContainer for \"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" returns successfully"
Jul 6 23:47:41.863232 kubelet[3343]: I0706 23:47:41.863101 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-666c858d6f-x5lwm" podStartSLOduration=22.257472832 podStartE2EDuration="25.863085701s" podCreationTimestamp="2025-07-06 23:47:16 +0000 UTC" firstStartedPulling="2025-07-06 23:47:37.122944511 +0000 UTC m=+46.562946503" lastFinishedPulling="2025-07-06 23:47:40.72855738 +0000 UTC m=+50.168559372" observedRunningTime="2025-07-06 23:47:41.861854843 +0000 UTC m=+51.301856843" watchObservedRunningTime="2025-07-06 23:47:41.863085701 +0000 UTC m=+51.303087709"
Jul 6 23:47:41.877931 containerd[1874]: time="2025-07-06T23:47:41.877899156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" id:\"7c87b631a56c9f66821c8944294ed9a6becbeee05436de2b1b1b7935bcd0eefa\" pid:5472 exited_at:{seconds:1751845661 nanos:877283843}"
Jul 6 23:47:43.586438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673517315.mount: Deactivated successfully.
Jul 6 23:47:43.956520 containerd[1874]: time="2025-07-06T23:47:43.956151005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:43.959310 containerd[1874]: time="2025-07-06T23:47:43.959278403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 6 23:47:43.962822 containerd[1874]: time="2025-07-06T23:47:43.962774155Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:43.969297 containerd[1874]: time="2025-07-06T23:47:43.969252765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:43.969946 containerd[1874]: time="2025-07-06T23:47:43.969581614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.24016305s"
Jul 6 23:47:43.969946 containerd[1874]: time="2025-07-06T23:47:43.969608287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 6 23:47:43.973175 containerd[1874]: time="2025-07-06T23:47:43.973149768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 6 23:47:43.975140 containerd[1874]: time="2025-07-06T23:47:43.974654569Z" level=info msg="CreateContainer within sandbox \"cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 6 23:47:44.001622 containerd[1874]: time="2025-07-06T23:47:44.001590941Z" level=info msg="Container cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:44.021115 containerd[1874]: time="2025-07-06T23:47:44.021078508Z" level=info msg="CreateContainer within sandbox \"cd44587897ac0960df5892dde44c6d3423dba16372b1857502c28722ed15a547\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\""
Jul 6 23:47:44.021977 containerd[1874]: time="2025-07-06T23:47:44.021953948Z" level=info msg="StartContainer for \"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\""
Jul 6 23:47:44.023270 containerd[1874]: time="2025-07-06T23:47:44.023237520Z" level=info msg="connecting to shim cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9" address="unix:///run/containerd/s/cd6f8dfee4e781659d48374b4572c9d2efa60746e4381d154c244afe2f60a999" protocol=ttrpc version=3
Jul 6 23:47:44.041868 systemd[1]: Started cri-containerd-cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9.scope - libcontainer container cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9.
Jul 6 23:47:44.076418 containerd[1874]: time="2025-07-06T23:47:44.076387947Z" level=info msg="StartContainer for \"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" returns successfully"
Jul 6 23:47:44.940853 containerd[1874]: time="2025-07-06T23:47:44.940808906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"01880095a9efd7bf4d9ceb9fcd9a72f082f2ce9e32314602c2e0be521bb2769f\" pid:5548 exit_status:1 exited_at:{seconds:1751845664 nanos:935603627}"
Jul 6 23:47:45.947522 containerd[1874]: time="2025-07-06T23:47:45.947479171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"7c098cc60d78ec741b09fff7a445f150c983bccedee7a1df91f134635d4c28f3\" pid:5575 exited_at:{seconds:1751845665 nanos:946354588}"
Jul 6 23:47:45.966893 kubelet[3343]: I0706 23:47:45.966836 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-4wg5n" podStartSLOduration=24.225116365 podStartE2EDuration="29.966818838s" podCreationTimestamp="2025-07-06 23:47:16 +0000 UTC" firstStartedPulling="2025-07-06 23:47:38.230653513 +0000 UTC m=+47.670655505" lastFinishedPulling="2025-07-06 23:47:43.972355978 +0000 UTC m=+53.412357978" observedRunningTime="2025-07-06 23:47:44.878253564 +0000 UTC m=+54.318255556" watchObservedRunningTime="2025-07-06 23:47:45.966818838 +0000 UTC m=+55.406820830"
Jul 6 23:47:46.129121 containerd[1874]: time="2025-07-06T23:47:46.129033870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:46.134438 containerd[1874]: time="2025-07-06T23:47:46.134273061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149"
Jul 6 23:47:46.138773 containerd[1874]: time="2025-07-06T23:47:46.138442136Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:46.143043 containerd[1874]: time="2025-07-06T23:47:46.142990517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:46.143751 containerd[1874]: time="2025-07-06T23:47:46.143314702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.170135189s"
Jul 6 23:47:46.143751 containerd[1874]: time="2025-07-06T23:47:46.143344367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Jul 6 23:47:46.145293 containerd[1874]: time="2025-07-06T23:47:46.145277588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 6 23:47:46.147999 containerd[1874]: time="2025-07-06T23:47:46.147960173Z" level=info msg="CreateContainer within sandbox \"7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 6 23:47:46.179796 containerd[1874]: time="2025-07-06T23:47:46.177825570Z" level=info msg="Container c9b57c6408f5205d76bc111c0b4c4c91a5ee48b9f4e53d683e823148ee1a3e59: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:46.196147 containerd[1874]: time="2025-07-06T23:47:46.195730717Z" level=info msg="CreateContainer within sandbox \"7f2590259e9a02a93d5b28c0e6abab2e41ed90183ac2aeb36e6d3c06b3015fbe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c9b57c6408f5205d76bc111c0b4c4c91a5ee48b9f4e53d683e823148ee1a3e59\""
Jul 6 23:47:46.196849 containerd[1874]: time="2025-07-06T23:47:46.196726345Z" level=info msg="StartContainer for \"c9b57c6408f5205d76bc111c0b4c4c91a5ee48b9f4e53d683e823148ee1a3e59\""
Jul 6 23:47:46.198128 containerd[1874]: time="2025-07-06T23:47:46.198041333Z" level=info msg="connecting to shim c9b57c6408f5205d76bc111c0b4c4c91a5ee48b9f4e53d683e823148ee1a3e59" address="unix:///run/containerd/s/64b847b0b31c2327d4efc59b9ebad371221f6e5e6a95a1dac1c0b3b5ae339cb7" protocol=ttrpc version=3
Jul 6 23:47:46.223955 systemd[1]: Started cri-containerd-c9b57c6408f5205d76bc111c0b4c4c91a5ee48b9f4e53d683e823148ee1a3e59.scope - libcontainer container c9b57c6408f5205d76bc111c0b4c4c91a5ee48b9f4e53d683e823148ee1a3e59.
Jul 6 23:47:46.282670 containerd[1874]: time="2025-07-06T23:47:46.282627704Z" level=info msg="StartContainer for \"c9b57c6408f5205d76bc111c0b4c4c91a5ee48b9f4e53d683e823148ee1a3e59\" returns successfully"
Jul 6 23:47:47.866967 kubelet[3343]: I0706 23:47:47.866930 3343 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:47:48.528029 containerd[1874]: time="2025-07-06T23:47:48.527974986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:48.531439 containerd[1874]: time="2025-07-06T23:47:48.531403499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 6 23:47:48.536188 containerd[1874]: time="2025-07-06T23:47:48.536146379Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:48.539819 containerd[1874]: time="2025-07-06T23:47:48.539780370Z"
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:48.540168 containerd[1874]: time="2025-07-06T23:47:48.540043458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.394655683s" Jul 6 23:47:48.540168 containerd[1874]: time="2025-07-06T23:47:48.540082867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:47:48.542176 containerd[1874]: time="2025-07-06T23:47:48.541645976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:47:48.543186 containerd[1874]: time="2025-07-06T23:47:48.543139170Z" level=info msg="CreateContainer within sandbox \"a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:47:48.589639 containerd[1874]: time="2025-07-06T23:47:48.588719590Z" level=info msg="Container 5152f231229fd6f9a94f7c0744e1804ffa91b67d7a84c44b4452211e01d892b8: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:48.620919 containerd[1874]: time="2025-07-06T23:47:48.620818585Z" level=info msg="CreateContainer within sandbox \"a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5152f231229fd6f9a94f7c0744e1804ffa91b67d7a84c44b4452211e01d892b8\"" Jul 6 23:47:48.622478 containerd[1874]: time="2025-07-06T23:47:48.621919585Z" level=info msg="StartContainer for 
\"5152f231229fd6f9a94f7c0744e1804ffa91b67d7a84c44b4452211e01d892b8\"" Jul 6 23:47:48.623508 containerd[1874]: time="2025-07-06T23:47:48.623472965Z" level=info msg="connecting to shim 5152f231229fd6f9a94f7c0744e1804ffa91b67d7a84c44b4452211e01d892b8" address="unix:///run/containerd/s/43ef982d34058b5dffa317f115c938833d1f83bbf1dc91adec07a250219ce80c" protocol=ttrpc version=3 Jul 6 23:47:48.641860 systemd[1]: Started cri-containerd-5152f231229fd6f9a94f7c0744e1804ffa91b67d7a84c44b4452211e01d892b8.scope - libcontainer container 5152f231229fd6f9a94f7c0744e1804ffa91b67d7a84c44b4452211e01d892b8. Jul 6 23:47:48.674908 containerd[1874]: time="2025-07-06T23:47:48.674846470Z" level=info msg="StartContainer for \"5152f231229fd6f9a94f7c0744e1804ffa91b67d7a84c44b4452211e01d892b8\" returns successfully" Jul 6 23:47:48.899632 containerd[1874]: time="2025-07-06T23:47:48.899134267Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:48.902342 containerd[1874]: time="2025-07-06T23:47:48.902313565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:47:48.903487 containerd[1874]: time="2025-07-06T23:47:48.903462918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 361.791958ms" Jul 6 23:47:48.903594 containerd[1874]: time="2025-07-06T23:47:48.903579698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:47:48.905081 containerd[1874]: time="2025-07-06T23:47:48.905002690Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:47:48.905668 containerd[1874]: time="2025-07-06T23:47:48.905644100Z" level=info msg="CreateContainer within sandbox \"609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:47:48.945712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164172093.mount: Deactivated successfully. Jul 6 23:47:48.945935 containerd[1874]: time="2025-07-06T23:47:48.945844951Z" level=info msg="Container 5636bdc5100e099f7fe54350b076575054708016139fd0d209e5b266a25c745d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:48.964052 containerd[1874]: time="2025-07-06T23:47:48.963959860Z" level=info msg="CreateContainer within sandbox \"609971c69f9b89fb0134b7dd0a59df09e7fafc68d5a7ca6d897760fdc4eecda2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5636bdc5100e099f7fe54350b076575054708016139fd0d209e5b266a25c745d\"" Jul 6 23:47:48.965499 containerd[1874]: time="2025-07-06T23:47:48.964616078Z" level=info msg="StartContainer for \"5636bdc5100e099f7fe54350b076575054708016139fd0d209e5b266a25c745d\"" Jul 6 23:47:48.966543 containerd[1874]: time="2025-07-06T23:47:48.966463835Z" level=info msg="connecting to shim 5636bdc5100e099f7fe54350b076575054708016139fd0d209e5b266a25c745d" address="unix:///run/containerd/s/81195c70895e0917eeef238c7b601586e851696ba7146a65bd58280482528959" protocol=ttrpc version=3 Jul 6 23:47:48.983853 systemd[1]: Started cri-containerd-5636bdc5100e099f7fe54350b076575054708016139fd0d209e5b266a25c745d.scope - libcontainer container 5636bdc5100e099f7fe54350b076575054708016139fd0d209e5b266a25c745d. 
Jul 6 23:47:49.012520 containerd[1874]: time="2025-07-06T23:47:49.012479667Z" level=info msg="StartContainer for \"5636bdc5100e099f7fe54350b076575054708016139fd0d209e5b266a25c745d\" returns successfully" Jul 6 23:47:49.894317 kubelet[3343]: I0706 23:47:49.894257 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cc56c86c5-67km5" podStartSLOduration=31.078047962 podStartE2EDuration="38.894241837s" podCreationTimestamp="2025-07-06 23:47:11 +0000 UTC" firstStartedPulling="2025-07-06 23:47:38.32833142 +0000 UTC m=+47.768333412" lastFinishedPulling="2025-07-06 23:47:46.144525295 +0000 UTC m=+55.584527287" observedRunningTime="2025-07-06 23:47:46.881177408 +0000 UTC m=+56.321179416" watchObservedRunningTime="2025-07-06 23:47:49.894241837 +0000 UTC m=+59.334243829" Jul 6 23:47:50.429279 kubelet[3343]: I0706 23:47:50.428893 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cc56c86c5-l2zwh" podStartSLOduration=29.574583476 podStartE2EDuration="39.428876132s" podCreationTimestamp="2025-07-06 23:47:11 +0000 UTC" firstStartedPulling="2025-07-06 23:47:39.049843129 +0000 UTC m=+48.489845121" lastFinishedPulling="2025-07-06 23:47:48.904135785 +0000 UTC m=+58.344137777" observedRunningTime="2025-07-06 23:47:49.895770697 +0000 UTC m=+59.335772697" watchObservedRunningTime="2025-07-06 23:47:50.428876132 +0000 UTC m=+59.868878132" Jul 6 23:47:50.609504 containerd[1874]: time="2025-07-06T23:47:50.609447746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:50.617245 containerd[1874]: time="2025-07-06T23:47:50.616296165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 6 23:47:50.620128 containerd[1874]: time="2025-07-06T23:47:50.620095593Z" level=info 
msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:50.627304 containerd[1874]: time="2025-07-06T23:47:50.627263166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:50.628018 containerd[1874]: time="2025-07-06T23:47:50.627952793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.722923398s" Jul 6 23:47:50.628018 containerd[1874]: time="2025-07-06T23:47:50.628003763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 6 23:47:50.631611 containerd[1874]: time="2025-07-06T23:47:50.631469222Z" level=info msg="CreateContainer within sandbox \"a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:47:50.658429 containerd[1874]: time="2025-07-06T23:47:50.658241289Z" level=info msg="Container a4e19bb49b6310382d18085b09d99d0d92831f73a266b8c85f2c55947bca828c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:50.680861 containerd[1874]: time="2025-07-06T23:47:50.680500796Z" level=info msg="CreateContainer within sandbox \"a48c8b355ffde99c548328dcb4bfca680f4cbf8da321d86eb3ea31ee8c6eaaa1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"a4e19bb49b6310382d18085b09d99d0d92831f73a266b8c85f2c55947bca828c\"" Jul 6 23:47:50.681140 containerd[1874]: time="2025-07-06T23:47:50.681078876Z" level=info msg="StartContainer for \"a4e19bb49b6310382d18085b09d99d0d92831f73a266b8c85f2c55947bca828c\"" Jul 6 23:47:50.683701 containerd[1874]: time="2025-07-06T23:47:50.683543243Z" level=info msg="connecting to shim a4e19bb49b6310382d18085b09d99d0d92831f73a266b8c85f2c55947bca828c" address="unix:///run/containerd/s/43ef982d34058b5dffa317f115c938833d1f83bbf1dc91adec07a250219ce80c" protocol=ttrpc version=3 Jul 6 23:47:50.714937 systemd[1]: Started cri-containerd-a4e19bb49b6310382d18085b09d99d0d92831f73a266b8c85f2c55947bca828c.scope - libcontainer container a4e19bb49b6310382d18085b09d99d0d92831f73a266b8c85f2c55947bca828c. Jul 6 23:47:50.746104 containerd[1874]: time="2025-07-06T23:47:50.746055577Z" level=info msg="StartContainer for \"a4e19bb49b6310382d18085b09d99d0d92831f73a266b8c85f2c55947bca828c\" returns successfully" Jul 6 23:47:50.890340 kubelet[3343]: I0706 23:47:50.890284 3343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tnbxw" podStartSLOduration=22.665435796 podStartE2EDuration="34.890271354s" podCreationTimestamp="2025-07-06 23:47:16 +0000 UTC" firstStartedPulling="2025-07-06 23:47:38.404192698 +0000 UTC m=+47.844194690" lastFinishedPulling="2025-07-06 23:47:50.629028256 +0000 UTC m=+60.069030248" observedRunningTime="2025-07-06 23:47:50.88999989 +0000 UTC m=+60.330001882" watchObservedRunningTime="2025-07-06 23:47:50.890271354 +0000 UTC m=+60.330273346" Jul 6 23:47:51.730764 kubelet[3343]: I0706 23:47:51.730693 3343 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:47:51.732734 kubelet[3343]: I0706 23:47:51.732712 3343 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:48:02.832245 containerd[1874]: time="2025-07-06T23:48:02.832199866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\" id:\"233105ceb2e45bcb300f46053f5291cc42a464304b3f93ed665f9ab32068b98a\" pid:5768 exited_at:{seconds:1751845682 nanos:831716644}" Jul 6 23:48:09.119942 containerd[1874]: time="2025-07-06T23:48:09.119906523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"8d21e3ebe9fe0bdc4188bb6eff45e6075d547fc81f4331eac5f96b8087c25215\" pid:5793 exited_at:{seconds:1751845689 nanos:119456605}" Jul 6 23:48:11.886531 containerd[1874]: time="2025-07-06T23:48:11.886386426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" id:\"ecc74b40fcf99c45c9298a544164dd76a1349db05f817904e8f0804b20469511\" pid:5816 exited_at:{seconds:1751845691 nanos:885934404}" Jul 6 23:48:15.400426 containerd[1874]: time="2025-07-06T23:48:15.400386141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" id:\"ec8e1d9cdf430b367bcd29edbc58ea44c1e770051b94dce879c5e13e3595acd9\" pid:5845 exited_at:{seconds:1751845695 nanos:400214177}" Jul 6 23:48:15.987127 containerd[1874]: time="2025-07-06T23:48:15.987088072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"db0934fdcfb95ff7ec668f81400b95cee0959c01ec78ebe9682afe8a6553512f\" pid:5866 exited_at:{seconds:1751845695 nanos:985659722}" Jul 6 23:48:19.344344 kubelet[3343]: I0706 23:48:19.344293 3343 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:48:32.837307 containerd[1874]: time="2025-07-06T23:48:32.837266690Z" level=info msg="TaskExit 
event in podsandbox handler container_id:\"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\" id:\"69f8b53e0e10381afee788f2d66990973f02921bb74a9a177197d6c1eafa9b66\" pid:5896 exited_at:{seconds:1751845712 nanos:836858983}" Jul 6 23:48:41.879457 containerd[1874]: time="2025-07-06T23:48:41.879409997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" id:\"8187d354e23abf0072948b6c986c31070144f9095b880e593d43df17908ff14f\" pid:5921 exited_at:{seconds:1751845721 nanos:878719666}" Jul 6 23:48:44.044931 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:45464.service - OpenSSH per-connection server daemon (10.200.16.10:45464). Jul 6 23:48:44.541937 sshd[5934]: Accepted publickey for core from 10.200.16.10 port 45464 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:44.543481 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:44.547409 systemd-logind[1858]: New session 10 of user core. Jul 6 23:48:44.553860 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:48:44.963134 sshd[5936]: Connection closed by 10.200.16.10 port 45464 Jul 6 23:48:44.964196 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:44.968845 systemd-logind[1858]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:48:44.968997 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:45464.service: Deactivated successfully. Jul 6 23:48:44.971936 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:48:44.974408 systemd-logind[1858]: Removed session 10. 
Jul 6 23:48:45.971500 containerd[1874]: time="2025-07-06T23:48:45.971446931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"8b72495dbb6f89f9fba2143e4b7713f020145d25f6798387f3e98b68add45b8b\" pid:5960 exited_at:{seconds:1751845725 nanos:970933541}" Jul 6 23:48:50.060300 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:42154.service - OpenSSH per-connection server daemon (10.200.16.10:42154). Jul 6 23:48:50.552794 sshd[5972]: Accepted publickey for core from 10.200.16.10 port 42154 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:50.554343 sshd-session[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:50.557876 systemd-logind[1858]: New session 11 of user core. Jul 6 23:48:50.567876 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:48:50.979474 sshd[5974]: Connection closed by 10.200.16.10 port 42154 Jul 6 23:48:50.979942 sshd-session[5972]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:50.983639 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:42154.service: Deactivated successfully. Jul 6 23:48:50.987250 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:48:50.989686 systemd-logind[1858]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:48:50.991614 systemd-logind[1858]: Removed session 11. Jul 6 23:48:56.066199 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:42168.service - OpenSSH per-connection server daemon (10.200.16.10:42168). Jul 6 23:48:56.544794 sshd[5994]: Accepted publickey for core from 10.200.16.10 port 42168 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:56.545510 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:56.548985 systemd-logind[1858]: New session 12 of user core. 
Jul 6 23:48:56.559897 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:48:56.924987 sshd[5996]: Connection closed by 10.200.16.10 port 42168 Jul 6 23:48:56.925382 sshd-session[5994]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:56.928552 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:42168.service: Deactivated successfully. Jul 6 23:48:56.930429 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:48:56.931116 systemd-logind[1858]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:48:56.933012 systemd-logind[1858]: Removed session 12. Jul 6 23:48:57.009108 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:42180.service - OpenSSH per-connection server daemon (10.200.16.10:42180). Jul 6 23:48:57.496622 sshd[6008]: Accepted publickey for core from 10.200.16.10 port 42180 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:57.497639 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:57.501124 systemd-logind[1858]: New session 13 of user core. Jul 6 23:48:57.509851 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:48:57.914210 sshd[6010]: Connection closed by 10.200.16.10 port 42180 Jul 6 23:48:57.914869 sshd-session[6008]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:57.917680 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:42180.service: Deactivated successfully. Jul 6 23:48:57.919202 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:48:57.920874 systemd-logind[1858]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:48:57.922640 systemd-logind[1858]: Removed session 13. Jul 6 23:48:58.005111 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:42184.service - OpenSSH per-connection server daemon (10.200.16.10:42184). 
Jul 6 23:48:58.500851 sshd[6020]: Accepted publickey for core from 10.200.16.10 port 42184 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:48:58.501972 sshd-session[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:58.505584 systemd-logind[1858]: New session 14 of user core. Jul 6 23:48:58.517841 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:48:58.891057 sshd[6022]: Connection closed by 10.200.16.10 port 42184 Jul 6 23:48:58.891644 sshd-session[6020]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:58.894467 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:42184.service: Deactivated successfully. Jul 6 23:48:58.896788 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:48:58.898055 systemd-logind[1858]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:48:58.898955 systemd-logind[1858]: Removed session 14. Jul 6 23:49:02.824800 containerd[1874]: time="2025-07-06T23:49:02.824753512Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\" id:\"9bbe55fd859936d38b7ca25d428cc269be11f2cd69081a3d5c9425785eedccc1\" pid:6051 exit_status:1 exited_at:{seconds:1751845742 nanos:824442927}" Jul 6 23:49:03.985529 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:52350.service - OpenSSH per-connection server daemon (10.200.16.10:52350). Jul 6 23:49:04.466241 sshd[6065]: Accepted publickey for core from 10.200.16.10 port 52350 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:04.467347 sshd-session[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:04.470760 systemd-logind[1858]: New session 15 of user core. Jul 6 23:49:04.476855 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 6 23:49:04.845774 sshd[6067]: Connection closed by 10.200.16.10 port 52350 Jul 6 23:49:04.846048 sshd-session[6065]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:04.850658 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:52350.service: Deactivated successfully. Jul 6 23:49:04.852889 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:49:04.854338 systemd-logind[1858]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:49:04.855206 systemd-logind[1858]: Removed session 15. Jul 6 23:49:09.065130 containerd[1874]: time="2025-07-06T23:49:09.065091019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"e2a3edd2a4d9d6b52aa34c5aab25c60eaf32377ba796b13efb8bf9c4ad609a80\" pid:6098 exited_at:{seconds:1751845749 nanos:64708944}" Jul 6 23:49:09.931429 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:43576.service - OpenSSH per-connection server daemon (10.200.16.10:43576). Jul 6 23:49:10.409931 sshd[6109]: Accepted publickey for core from 10.200.16.10 port 43576 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:10.410255 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:10.413792 systemd-logind[1858]: New session 16 of user core. Jul 6 23:49:10.420839 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:49:10.797782 sshd[6111]: Connection closed by 10.200.16.10 port 43576 Jul 6 23:49:10.798175 sshd-session[6109]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:10.800998 systemd-logind[1858]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:49:10.801207 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:43576.service: Deactivated successfully. Jul 6 23:49:10.802613 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:49:10.804930 systemd-logind[1858]: Removed session 16. 
Jul 6 23:49:11.871377 containerd[1874]: time="2025-07-06T23:49:11.871337230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" id:\"00cc40e8205187e916476a3a53c4bb66a5712cc8045505f304bfd85fde6d34ee\" pid:6148 exited_at:{seconds:1751845751 nanos:871048726}" Jul 6 23:49:15.435135 containerd[1874]: time="2025-07-06T23:49:15.435094408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" id:\"5094c31473404a5cede2d868cd0dd6150066e7ed34b5580a23f46770ca8d3647\" pid:6170 exited_at:{seconds:1751845755 nanos:434934724}" Jul 6 23:49:15.879109 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:43592.service - OpenSSH per-connection server daemon (10.200.16.10:43592). Jul 6 23:49:15.914653 containerd[1874]: time="2025-07-06T23:49:15.914617461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"4086cf1d0b4bc9163ca326bbd756c7623896a653f29eb17b717c3222ed3cb293\" pid:6193 exited_at:{seconds:1751845755 nanos:914380527}" Jul 6 23:49:16.364774 sshd[6199]: Accepted publickey for core from 10.200.16.10 port 43592 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8 Jul 6 23:49:16.365180 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:49:16.368415 systemd-logind[1858]: New session 17 of user core. Jul 6 23:49:16.371866 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:49:16.767679 sshd[6206]: Connection closed by 10.200.16.10 port 43592 Jul 6 23:49:16.768152 sshd-session[6199]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:16.772867 systemd-logind[1858]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:49:16.773421 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:43592.service: Deactivated successfully. 
Jul 6 23:49:16.775173 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:49:16.777222 systemd-logind[1858]: Removed session 17.
Jul 6 23:49:16.858316 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:43594.service - OpenSSH per-connection server daemon (10.200.16.10:43594).
Jul 6 23:49:17.355508 sshd[6218]: Accepted publickey for core from 10.200.16.10 port 43594 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:17.356705 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:17.362332 systemd-logind[1858]: New session 18 of user core.
Jul 6 23:49:17.368860 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:49:17.832205 sshd[6220]: Connection closed by 10.200.16.10 port 43594
Jul 6 23:49:17.831720 sshd-session[6218]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:17.835186 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:43594.service: Deactivated successfully.
Jul 6 23:49:17.836980 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:49:17.837845 systemd-logind[1858]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:49:17.839574 systemd-logind[1858]: Removed session 18.
Jul 6 23:49:17.916225 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:43602.service - OpenSSH per-connection server daemon (10.200.16.10:43602).
Jul 6 23:49:18.397092 sshd[6230]: Accepted publickey for core from 10.200.16.10 port 43602 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:18.398263 sshd-session[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:18.404227 systemd-logind[1858]: New session 19 of user core.
Jul 6 23:49:18.410902 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:49:19.364813 sshd[6232]: Connection closed by 10.200.16.10 port 43602
Jul 6 23:49:19.365519 sshd-session[6230]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:19.368771 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:43602.service: Deactivated successfully.
Jul 6 23:49:19.370380 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:49:19.371128 systemd-logind[1858]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:49:19.372213 systemd-logind[1858]: Removed session 19.
Jul 6 23:49:19.449097 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:43618.service - OpenSSH per-connection server daemon (10.200.16.10:43618).
Jul 6 23:49:19.932311 sshd[6249]: Accepted publickey for core from 10.200.16.10 port 43618 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:19.934502 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:19.939796 systemd-logind[1858]: New session 20 of user core.
Jul 6 23:49:19.944902 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:49:20.441202 sshd[6251]: Connection closed by 10.200.16.10 port 43618
Jul 6 23:49:20.441714 sshd-session[6249]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:20.446119 systemd-logind[1858]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:49:20.447001 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:43618.service: Deactivated successfully.
Jul 6 23:49:20.449584 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:49:20.451248 systemd-logind[1858]: Removed session 20.
Jul 6 23:49:20.527921 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:39816.service - OpenSSH per-connection server daemon (10.200.16.10:39816).
Jul 6 23:49:21.005323 sshd[6262]: Accepted publickey for core from 10.200.16.10 port 39816 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:21.006427 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:21.010177 systemd-logind[1858]: New session 21 of user core.
Jul 6 23:49:21.028842 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:49:21.389805 sshd[6264]: Connection closed by 10.200.16.10 port 39816
Jul 6 23:49:21.390521 sshd-session[6262]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:21.394617 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:39816.service: Deactivated successfully.
Jul 6 23:49:21.397028 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:49:21.398897 systemd-logind[1858]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:49:21.400063 systemd-logind[1858]: Removed session 21.
Jul 6 23:49:26.480545 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:39822.service - OpenSSH per-connection server daemon (10.200.16.10:39822).
Jul 6 23:49:26.979363 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 39822 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:26.980394 sshd-session[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:26.984227 systemd-logind[1858]: New session 22 of user core.
Jul 6 23:49:26.991848 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:49:27.369830 sshd[6280]: Connection closed by 10.200.16.10 port 39822
Jul 6 23:49:27.370369 sshd-session[6278]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:27.373678 systemd-logind[1858]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:49:27.373829 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:39822.service: Deactivated successfully.
Jul 6 23:49:27.375627 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:49:27.377401 systemd-logind[1858]: Removed session 22.
Jul 6 23:49:32.457373 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:38386.service - OpenSSH per-connection server daemon (10.200.16.10:38386).
Jul 6 23:49:32.823342 containerd[1874]: time="2025-07-06T23:49:32.823304133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386fa5f7593ec7180e97afe1097097b99e65fbac947b252f4b320036b55bc514\" id:\"9025f6fed41cf71c4fffd4687db7aafadca1275361771c879d24b70b5488c8db\" pid:6309 exited_at:{seconds:1751845772 nanos:823085646}"
Jul 6 23:49:32.933985 sshd[6294]: Accepted publickey for core from 10.200.16.10 port 38386 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:32.935878 sshd-session[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:32.939763 systemd-logind[1858]: New session 23 of user core.
Jul 6 23:49:32.945865 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:49:33.313527 sshd[6319]: Connection closed by 10.200.16.10 port 38386
Jul 6 23:49:33.313094 sshd-session[6294]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:33.315574 systemd-logind[1858]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:49:33.316592 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:38386.service: Deactivated successfully.
Jul 6 23:49:33.318619 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:49:33.320620 systemd-logind[1858]: Removed session 23.
Jul 6 23:49:38.399527 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:38398.service - OpenSSH per-connection server daemon (10.200.16.10:38398).
Jul 6 23:49:38.888445 sshd[6331]: Accepted publickey for core from 10.200.16.10 port 38398 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:38.890612 sshd-session[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:38.895128 systemd-logind[1858]: New session 24 of user core.
Jul 6 23:49:38.899870 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:49:39.293781 sshd[6334]: Connection closed by 10.200.16.10 port 38398
Jul 6 23:49:39.294290 sshd-session[6331]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:39.297247 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:38398.service: Deactivated successfully.
Jul 6 23:49:39.298904 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:49:39.299603 systemd-logind[1858]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:49:39.301352 systemd-logind[1858]: Removed session 24.
Jul 6 23:49:41.876478 containerd[1874]: time="2025-07-06T23:49:41.876439779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a85aa4439eb6643a520bcc18758bc7b5a5f0010237deb2c85a16a84c2086f106\" id:\"a4ccdbe1c72896f0001e72e4372c46eaca5ef34513fbd0734103e77d2a1b6f78\" pid:6357 exited_at:{seconds:1751845781 nanos:876178773}"
Jul 6 23:49:44.380431 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:44854.service - OpenSSH per-connection server daemon (10.200.16.10:44854).
Jul 6 23:49:44.861113 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 44854 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:44.862970 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:44.866713 systemd-logind[1858]: New session 25 of user core.
Jul 6 23:49:44.874028 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:49:45.239594 sshd[6369]: Connection closed by 10.200.16.10 port 44854
Jul 6 23:49:45.239515 sshd-session[6367]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:45.242636 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:44854.service: Deactivated successfully.
Jul 6 23:49:45.244259 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:49:45.245524 systemd-logind[1858]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:49:45.246464 systemd-logind[1858]: Removed session 25.
Jul 6 23:49:46.007277 containerd[1874]: time="2025-07-06T23:49:46.007110139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe848c48437311fb44e0b77fe738ae8cb49268156c3d28bcc3f3ab265f1fba9\" id:\"f5344c919072b9953c01eca769f73121281d33616b9b688baef484b540fb2508\" pid:6391 exited_at:{seconds:1751845786 nanos:6823067}"
Jul 6 23:49:50.330280 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:45750.service - OpenSSH per-connection server daemon (10.200.16.10:45750).
Jul 6 23:49:50.823184 sshd[6403]: Accepted publickey for core from 10.200.16.10 port 45750 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:50.824650 sshd-session[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:50.830461 systemd-logind[1858]: New session 26 of user core.
Jul 6 23:49:50.833877 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:49:51.217414 sshd[6407]: Connection closed by 10.200.16.10 port 45750
Jul 6 23:49:51.219184 sshd-session[6403]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:51.223781 systemd-logind[1858]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:49:51.223968 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:45750.service: Deactivated successfully.
Jul 6 23:49:51.226327 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:49:51.228426 systemd-logind[1858]: Removed session 26.
Jul 6 23:49:56.309931 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:45752.service - OpenSSH per-connection server daemon (10.200.16.10:45752).
Jul 6 23:49:56.803444 sshd[6419]: Accepted publickey for core from 10.200.16.10 port 45752 ssh2: RSA SHA256:0/AHONPd/Cla0u01jeKf+n9bVAD+ttQ1+M75e1nZbX8
Jul 6 23:49:56.804653 sshd-session[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:49:56.808751 systemd-logind[1858]: New session 27 of user core.
Jul 6 23:49:56.811856 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 6 23:49:57.191866 sshd[6421]: Connection closed by 10.200.16.10 port 45752
Jul 6 23:49:57.191693 sshd-session[6419]: pam_unix(sshd:session): session closed for user core
Jul 6 23:49:57.195152 systemd-logind[1858]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:49:57.195292 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:45752.service: Deactivated successfully.
Jul 6 23:49:57.197818 systemd[1]: session-27.scope: Deactivated successfully.
Jul 6 23:49:57.199608 systemd-logind[1858]: Removed session 27.