Jul 15 04:38:52.085676 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 15 04:38:52.085693 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025
Jul 15 04:38:52.085699 kernel: KASLR enabled
Jul 15 04:38:52.085704 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 15 04:38:52.085708 kernel: printk: legacy bootconsole [pl11] enabled
Jul 15 04:38:52.085712 kernel: efi: EFI v2.7 by EDK II
Jul 15 04:38:52.085717 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20d018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 15 04:38:52.085721 kernel: random: crng init done
Jul 15 04:38:52.085725 kernel: secureboot: Secure boot disabled
Jul 15 04:38:52.085729 kernel: ACPI: Early table checksum verification disabled
Jul 15 04:38:52.085733 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 15 04:38:52.085736 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085740 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085745 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 15 04:38:52.085750 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085754 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085759 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085763 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085768 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085772 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085776 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 15 04:38:52.085780 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:38:52.085784 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 15 04:38:52.085788 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 04:38:52.085792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 15 04:38:52.085796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 15 04:38:52.085801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 15 04:38:52.085805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 15 04:38:52.085809 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 15 04:38:52.085814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 15 04:38:52.085818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 15 04:38:52.085822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 15 04:38:52.085826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 15 04:38:52.085830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 15 04:38:52.085835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 15 04:38:52.085839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 15 04:38:52.085843 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 15 04:38:52.085847 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Jul 15 04:38:52.085851 kernel: Zone ranges:
Jul 15 04:38:52.085856 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 15 04:38:52.085862 kernel: DMA32 empty
Jul 15 04:38:52.085867 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 15 04:38:52.085871 kernel: Device empty
Jul 15 04:38:52.085875 kernel: Movable zone start for each node
Jul 15 04:38:52.085880 kernel: Early memory node ranges
Jul 15 04:38:52.085885 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 15 04:38:52.085889 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 15 04:38:52.085894 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 15 04:38:52.085898 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 15 04:38:52.085903 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 15 04:38:52.085907 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 15 04:38:52.085911 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 15 04:38:52.085916 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 15 04:38:52.085920 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 15 04:38:52.085924 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 15 04:38:52.085929 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 15 04:38:52.085933 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Jul 15 04:38:52.085938 kernel: psci: probing for conduit method from ACPI.
Jul 15 04:38:52.085943 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 04:38:52.085947 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 04:38:52.085951 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 15 04:38:52.085955 kernel: psci: SMC Calling Convention v1.4
Jul 15 04:38:52.085960 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 15 04:38:52.085964 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 15 04:38:52.085969 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 04:38:52.085973 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 04:38:52.085977 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 15 04:38:52.085982 kernel: Detected PIPT I-cache on CPU0
Jul 15 04:38:52.085987 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 15 04:38:52.085991 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 04:38:52.085996 kernel: CPU features: detected: Spectre-v4
Jul 15 04:38:52.086000 kernel: CPU features: detected: Spectre-BHB
Jul 15 04:38:52.086004 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 04:38:52.086009 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 04:38:52.086013 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 15 04:38:52.086017 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 04:38:52.086022 kernel: alternatives: applying boot alternatives
Jul 15 04:38:52.086027 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:38:52.086032 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 04:38:52.086037 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 04:38:52.086041 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 04:38:52.086046 kernel: Fallback order for Node 0: 0
Jul 15 04:38:52.086050 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 15 04:38:52.086054 kernel: Policy zone: Normal
Jul 15 04:38:52.086059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 04:38:52.086063 kernel: software IO TLB: area num 2.
Jul 15 04:38:52.086068 kernel: software IO TLB: mapped [mem 0x0000000036210000-0x000000003a210000] (64MB)
Jul 15 04:38:52.086072 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 04:38:52.086076 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 04:38:52.086081 kernel: rcu: RCU event tracing is enabled.
Jul 15 04:38:52.086087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 04:38:52.086091 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 04:38:52.086096 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 04:38:52.086100 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 04:38:52.086104 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 04:38:52.086109 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 04:38:52.086113 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 04:38:52.086118 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 04:38:52.086122 kernel: GICv3: 960 SPIs implemented
Jul 15 04:38:52.086126 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 04:38:52.086131 kernel: Root IRQ handler: gic_handle_irq
Jul 15 04:38:52.086135 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 15 04:38:52.086140 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 15 04:38:52.086145 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 15 04:38:52.086149 kernel: ITS: No ITS available, not enabling LPIs
Jul 15 04:38:52.086154 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 04:38:52.086158 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 15 04:38:52.086162 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 04:38:52.086167 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 15 04:38:52.086171 kernel: Console: colour dummy device 80x25
Jul 15 04:38:52.086176 kernel: printk: legacy console [tty1] enabled
Jul 15 04:38:52.086180 kernel: ACPI: Core revision 20240827
Jul 15 04:38:52.086185 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 15 04:38:52.086190 kernel: pid_max: default: 32768 minimum: 301
Jul 15 04:38:52.086195 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 04:38:52.086199 kernel: landlock: Up and running.
Jul 15 04:38:52.086204 kernel: SELinux: Initializing.
Jul 15 04:38:52.088262 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:38:52.088281 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:38:52.088288 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 15 04:38:52.088294 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 15 04:38:52.088299 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 15 04:38:52.088304 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 04:38:52.088309 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 04:38:52.088315 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 04:38:52.088320 kernel: Remapping and enabling EFI services.
Jul 15 04:38:52.088325 kernel: smp: Bringing up secondary CPUs ...
Jul 15 04:38:52.088329 kernel: Detected PIPT I-cache on CPU1
Jul 15 04:38:52.088334 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 15 04:38:52.088340 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 15 04:38:52.088345 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 04:38:52.088350 kernel: SMP: Total of 2 processors activated.
Jul 15 04:38:52.088354 kernel: CPU: All CPU(s) started at EL1
Jul 15 04:38:52.088359 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 04:38:52.088364 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 15 04:38:52.088369 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 04:38:52.088374 kernel: CPU features: detected: Common not Private translations
Jul 15 04:38:52.088379 kernel: CPU features: detected: CRC32 instructions
Jul 15 04:38:52.088385 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 15 04:38:52.088389 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 04:38:52.088394 kernel: CPU features: detected: LSE atomic instructions
Jul 15 04:38:52.088399 kernel: CPU features: detected: Privileged Access Never
Jul 15 04:38:52.088404 kernel: CPU features: detected: Speculation barrier (SB)
Jul 15 04:38:52.088409 kernel: CPU features: detected: TLB range maintenance instructions
Jul 15 04:38:52.088413 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 04:38:52.088418 kernel: CPU features: detected: Scalable Vector Extension
Jul 15 04:38:52.088423 kernel: alternatives: applying system-wide alternatives
Jul 15 04:38:52.088429 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 15 04:38:52.088433 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 15 04:38:52.088438 kernel: SVE: default vector length 16 bytes per vector
Jul 15 04:38:52.088444 kernel: Memory: 3959156K/4194160K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 213816K reserved, 16384K cma-reserved)
Jul 15 04:38:52.088448 kernel: devtmpfs: initialized
Jul 15 04:38:52.088453 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 04:38:52.088458 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 04:38:52.088463 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 04:38:52.088468 kernel: 0 pages in range for non-PLT usage
Jul 15 04:38:52.088473 kernel: 508448 pages in range for PLT usage
Jul 15 04:38:52.088478 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 04:38:52.088483 kernel: SMBIOS 3.1.0 present.
Jul 15 04:38:52.088488 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 15 04:38:52.088493 kernel: DMI: Memory slots populated: 2/2
Jul 15 04:38:52.088498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 04:38:52.088502 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 04:38:52.088507 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 04:38:52.088512 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 04:38:52.088518 kernel: audit: initializing netlink subsys (disabled)
Jul 15 04:38:52.088523 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 15 04:38:52.088528 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 04:38:52.088533 kernel: cpuidle: using governor menu
Jul 15 04:38:52.088537 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 04:38:52.088542 kernel: ASID allocator initialised with 32768 entries
Jul 15 04:38:52.088547 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 04:38:52.088552 kernel: Serial: AMBA PL011 UART driver
Jul 15 04:38:52.088557 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 04:38:52.088562 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 04:38:52.088567 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 04:38:52.088572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 04:38:52.088577 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 04:38:52.088582 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 04:38:52.088586 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 04:38:52.088591 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 04:38:52.088596 kernel: ACPI: Added _OSI(Module Device)
Jul 15 04:38:52.088601 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 04:38:52.088606 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 04:38:52.088611 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 04:38:52.088616 kernel: ACPI: Interpreter enabled
Jul 15 04:38:52.088620 kernel: ACPI: Using GIC for interrupt routing
Jul 15 04:38:52.088625 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 04:38:52.088630 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 04:38:52.088635 kernel: printk: legacy bootconsole [pl11] disabled
Jul 15 04:38:52.088639 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 15 04:38:52.088644 kernel: ACPI: CPU0 has been hot-added
Jul 15 04:38:52.088651 kernel: ACPI: CPU1 has been hot-added
Jul 15 04:38:52.088656 kernel: iommu: Default domain type: Translated
Jul 15 04:38:52.088661 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 04:38:52.088666 kernel: efivars: Registered efivars operations
Jul 15 04:38:52.088671 kernel: vgaarb: loaded
Jul 15 04:38:52.088676 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 04:38:52.088680 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 04:38:52.088685 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 04:38:52.088690 kernel: pnp: PnP ACPI init
Jul 15 04:38:52.088695 kernel: pnp: PnP ACPI: found 0 devices
Jul 15 04:38:52.088700 kernel: NET: Registered PF_INET protocol family
Jul 15 04:38:52.088705 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 04:38:52.088710 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 04:38:52.088715 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 04:38:52.088720 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 04:38:52.088724 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 04:38:52.088729 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 04:38:52.088734 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:38:52.088739 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:38:52.088744 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 04:38:52.088749 kernel: PCI: CLS 0 bytes, default 64
Jul 15 04:38:52.088754 kernel: kvm [1]: HYP mode not available
Jul 15 04:38:52.088758 kernel: Initialise system trusted keyrings
Jul 15 04:38:52.088763 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 04:38:52.088768 kernel: Key type asymmetric registered
Jul 15 04:38:52.088773 kernel: Asymmetric key parser 'x509' registered
Jul 15 04:38:52.088778 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 04:38:52.088783 kernel: io scheduler mq-deadline registered
Jul 15 04:38:52.088788 kernel: io scheduler kyber registered
Jul 15 04:38:52.088793 kernel: io scheduler bfq registered
Jul 15 04:38:52.088798 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 04:38:52.088802 kernel: thunder_xcv, ver 1.0
Jul 15 04:38:52.088807 kernel: thunder_bgx, ver 1.0
Jul 15 04:38:52.088812 kernel: nicpf, ver 1.0
Jul 15 04:38:52.088817 kernel: nicvf, ver 1.0
Jul 15 04:38:52.088939 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 04:38:52.088992 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:38:51 UTC (1752554331)
Jul 15 04:38:52.088999 kernel: efifb: probing for efifb
Jul 15 04:38:52.089004 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 15 04:38:52.089009 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 15 04:38:52.089014 kernel: efifb: scrolling: redraw
Jul 15 04:38:52.089018 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 15 04:38:52.089023 kernel: Console: switching to colour frame buffer device 128x48
Jul 15 04:38:52.089028 kernel: fb0: EFI VGA frame buffer device
Jul 15 04:38:52.089034 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 15 04:38:52.089039 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 04:38:52.089043 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 04:38:52.089048 kernel: NET: Registered PF_INET6 protocol family
Jul 15 04:38:52.089053 kernel: watchdog: NMI not fully supported
Jul 15 04:38:52.089058 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 04:38:52.089063 kernel: Segment Routing with IPv6
Jul 15 04:38:52.089067 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 04:38:52.089072 kernel: NET: Registered PF_PACKET protocol family
Jul 15 04:38:52.089078 kernel: Key type dns_resolver registered
Jul 15 04:38:52.089082 kernel: registered taskstats version 1
Jul 15 04:38:52.089087 kernel: Loading compiled-in X.509 certificates
Jul 15 04:38:52.089092 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2'
Jul 15 04:38:52.089097 kernel: Demotion targets for Node 0: null
Jul 15 04:38:52.089101 kernel: Key type .fscrypt registered
Jul 15 04:38:52.089106 kernel: Key type fscrypt-provisioning registered
Jul 15 04:38:52.089111 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 04:38:52.089116 kernel: ima: Allocated hash algorithm: sha1
Jul 15 04:38:52.089121 kernel: ima: No architecture policies found
Jul 15 04:38:52.089126 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 04:38:52.089131 kernel: clk: Disabling unused clocks
Jul 15 04:38:52.089135 kernel: PM: genpd: Disabling unused power domains
Jul 15 04:38:52.089140 kernel: Warning: unable to open an initial console.
Jul 15 04:38:52.089145 kernel: Freeing unused kernel memory: 39424K
Jul 15 04:38:52.089150 kernel: Run /init as init process
Jul 15 04:38:52.089154 kernel: with arguments:
Jul 15 04:38:52.089159 kernel: /init
Jul 15 04:38:52.089164 kernel: with environment:
Jul 15 04:38:52.089169 kernel: HOME=/
Jul 15 04:38:52.089174 kernel: TERM=linux
Jul 15 04:38:52.089178 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 04:38:52.089184 systemd[1]: Successfully made /usr/ read-only.
Jul 15 04:38:52.089191 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 04:38:52.089197 systemd[1]: Detected virtualization microsoft.
Jul 15 04:38:52.089202 systemd[1]: Detected architecture arm64.
Jul 15 04:38:52.089217 systemd[1]: Running in initrd.
Jul 15 04:38:52.089222 systemd[1]: No hostname configured, using default hostname.
Jul 15 04:38:52.089228 systemd[1]: Hostname set to .
Jul 15 04:38:52.089233 systemd[1]: Initializing machine ID from random generator.
Jul 15 04:38:52.089238 systemd[1]: Queued start job for default target initrd.target.
Jul 15 04:38:52.089244 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:38:52.089249 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:38:52.089255 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 04:38:52.089262 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 04:38:52.089267 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 04:38:52.089273 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 04:38:52.089279 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 04:38:52.089284 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 04:38:52.089289 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:38:52.089295 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:38:52.089300 systemd[1]: Reached target paths.target - Path Units.
Jul 15 04:38:52.089306 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 04:38:52.089311 systemd[1]: Reached target swap.target - Swaps.
Jul 15 04:38:52.089319 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 04:38:52.089324 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:38:52.089329 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:38:52.089334 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 04:38:52.089339 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 04:38:52.089345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:38:52.089351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:38:52.089356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:38:52.089361 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 04:38:52.089366 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 04:38:52.089371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 04:38:52.089376 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 04:38:52.089382 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 04:38:52.089388 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 04:38:52.089393 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 04:38:52.089398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 04:38:52.089416 systemd-journald[225]: Collecting audit messages is disabled.
Jul 15 04:38:52.089430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:38:52.089437 systemd-journald[225]: Journal started
Jul 15 04:38:52.089451 systemd-journald[225]: Runtime Journal (/run/log/journal/d1ebbb2221224f56a66a4f4498b451d1) is 8M, max 78.5M, 70.5M free.
Jul 15 04:38:52.093629 systemd-modules-load[227]: Inserted module 'overlay'
Jul 15 04:38:52.105451 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 04:38:52.114218 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 04:38:52.120085 systemd-modules-load[227]: Inserted module 'br_netfilter'
Jul 15 04:38:52.124930 kernel: Bridge firewalling registered
Jul 15 04:38:52.121174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 04:38:52.131076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:38:52.139613 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 04:38:52.151320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:38:52.155942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:38:52.167138 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 04:38:52.191477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 04:38:52.203112 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 04:38:52.218412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 04:38:52.233463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:38:52.233698 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 04:38:52.242563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:38:52.248812 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:38:52.272746 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 04:38:52.277146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 04:38:52.297689 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 04:38:52.303157 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 04:38:52.322818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:38:52.342384 systemd-resolved[261]: Positive Trust Anchors:
Jul 15 04:38:52.342398 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 04:38:52.342417 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 04:38:52.393814 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:38:52.344450 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 15 04:38:52.346050 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 04:38:52.351708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:38:52.452226 kernel: SCSI subsystem initialized
Jul 15 04:38:52.457227 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 04:38:52.465232 kernel: iscsi: registered transport (tcp)
Jul 15 04:38:52.478793 kernel: iscsi: registered transport (qla4xxx)
Jul 15 04:38:52.478828 kernel: QLogic iSCSI HBA Driver
Jul 15 04:38:52.492726 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 04:38:52.513697 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:38:52.520288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 04:38:52.567239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 04:38:52.572185 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 04:38:52.635223 kernel: raid6: neonx8 gen() 18558 MB/s
Jul 15 04:38:52.654214 kernel: raid6: neonx4 gen() 18563 MB/s
Jul 15 04:38:52.673214 kernel: raid6: neonx2 gen() 17074 MB/s
Jul 15 04:38:52.693215 kernel: raid6: neonx1 gen() 15060 MB/s
Jul 15 04:38:52.712213 kernel: raid6: int64x8 gen() 10536 MB/s
Jul 15 04:38:52.734225 kernel: raid6: int64x4 gen() 10598 MB/s
Jul 15 04:38:52.751217 kernel: raid6: int64x2 gen() 8979 MB/s
Jul 15 04:38:52.772808 kernel: raid6: int64x1 gen() 7016 MB/s
Jul 15 04:38:52.772815 kernel: raid6: using algorithm neonx4 gen() 18563 MB/s
Jul 15 04:38:52.795147 kernel: raid6: .... xor() 15150 MB/s, rmw enabled
Jul 15 04:38:52.795154 kernel: raid6: using neon recovery algorithm
Jul 15 04:38:52.803452 kernel: xor: measuring software checksum speed
Jul 15 04:38:52.803460 kernel: 8regs : 28569 MB/sec
Jul 15 04:38:52.807297 kernel: 32regs : 28785 MB/sec
Jul 15 04:38:52.810009 kernel: arm64_neon : 37607 MB/sec
Jul 15 04:38:52.813175 kernel: xor: using function: arm64_neon (37607 MB/sec)
Jul 15 04:38:52.852229 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 04:38:52.857237 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 04:38:52.866801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:38:52.895821 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jul 15 04:38:52.899932 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:38:52.913085 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 04:38:52.939509 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Jul 15 04:38:52.961273 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:38:52.967945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 04:38:53.022928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:38:53.035802 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 04:38:53.095238 kernel: hv_vmbus: Vmbus version:5.3
Jul 15 04:38:53.095444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:38:53.099406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:38:53.124158 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 04:38:53.124177 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 15 04:38:53.124185 kernel: hv_vmbus: registering driver hv_storvsc
Jul 15 04:38:53.124193 kernel: hv_vmbus: registering driver hid_hyperv
Jul 15 04:38:53.122675 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:38:53.138248 kernel: scsi host0: storvsc_host_t
Jul 15 04:38:53.138288 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 04:38:53.138295 kernel: scsi host1: storvsc_host_t
Jul 15 04:38:53.146258 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 15 04:38:53.146302 kernel: hv_vmbus: registering driver hv_netvsc
Jul 15 04:38:53.165404 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 15 04:38:53.165445 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 15 04:38:53.165482 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 15 04:38:53.177683 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 15 04:38:53.177751 kernel: PTP clock support registered
Jul 15 04:38:53.178438 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:38:53.199369 kernel: hv_utils: Registering HyperV Utility Driver
Jul 15 04:38:53.199387 kernel: hv_vmbus: registering driver hv_utils
Jul 15 04:38:53.196956 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:38:53.206255 kernel: hv_utils: Shutdown IC version 3.2
Jul 15 04:38:53.210513 kernel: hv_utils: Heartbeat IC version 3.0
Jul 15 04:38:53.213697 kernel: hv_utils: TimeSync IC version 4.0
Jul 15 04:38:53.215039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:38:53.569976 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 15 04:38:53.570115 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 15 04:38:53.570123 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 15 04:38:53.570201 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 15 04:38:53.570264 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 15 04:38:53.215115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:38:53.605662 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 15 04:38:53.605813 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 15 04:38:53.605878 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 15 04:38:53.605938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#57 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:38:53.606011 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#0 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:38:53.606067 kernel: hv_netvsc 0022487d-6654-0022-487d-66540022487d eth0: VF slot 1 added
Jul 15 04:38:53.606132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 04:38:53.606139 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 15 04:38:53.546777 systemd-resolved[261]: Clock change detected. Flushing caches.
Jul 15 04:38:53.614013 kernel: hv_vmbus: registering driver hv_pci
Jul 15 04:38:53.557811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:38:53.629475 kernel: hv_pci b7e07d66-ee9d-4712-adf1-ee77695ac018: PCI VMBus probing: Using version 0x10004
Jul 15 04:38:53.629612 kernel: hv_pci b7e07d66-ee9d-4712-adf1-ee77695ac018: PCI host bridge to bus ee9d:00
Jul 15 04:38:53.634237 kernel: pci_bus ee9d:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 15 04:38:53.639020 kernel: pci_bus ee9d:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 15 04:38:53.648254 kernel: pci ee9d:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 15 04:38:53.648126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:38:53.669044 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#199 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 15 04:38:53.669197 kernel: pci ee9d:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 15 04:38:53.669221 kernel: pci ee9d:00:02.0: enabling Extended Tags
Jul 15 04:38:53.689830 kernel: pci ee9d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ee9d:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 15 04:38:53.700421 kernel: pci_bus ee9d:00: busn_res: [bus 00-ff] end is updated to 00
Jul 15 04:38:53.700601 kernel: pci ee9d:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 15 04:38:53.707736 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#222 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 15 04:38:53.766795 kernel: mlx5_core ee9d:00:02.0: enabling device (0000 -> 0002)
Jul 15 04:38:53.774843 kernel: mlx5_core ee9d:00:02.0: PTM is not supported by PCIe
Jul 15 04:38:53.775003 kernel: mlx5_core ee9d:00:02.0: firmware version: 16.30.5006
Jul 15 04:38:53.945276 kernel: hv_netvsc 0022487d-6654-0022-487d-66540022487d eth0: VF registering: eth1
Jul 15 04:38:53.945476 kernel: mlx5_core ee9d:00:02.0 eth1: joined to eth0
Jul 15 04:38:53.950829 kernel: mlx5_core ee9d:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 15 04:38:53.958753 kernel: mlx5_core ee9d:00:02.0 enP61085s1: renamed from eth1
Jul 15 04:38:54.160235 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 15 04:38:54.201458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 15 04:38:54.230806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 15 04:38:54.240354 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 15 04:38:54.248605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 04:38:54.279876 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 15 04:38:54.294790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:38:54.290425 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:38:54.299860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:38:54.308568 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:38:54.333906 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 04:38:54.325835 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 04:38:54.338949 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 04:38:54.364504 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:38:55.341247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:38:55.353148 disk-uuid[650]: The operation has completed successfully.
Jul 15 04:38:55.356572 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 04:38:55.414690 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 04:38:55.414794 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 04:38:55.439835 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 04:38:55.455758 sh[818]: Success
Jul 15 04:38:55.488784 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 04:38:55.488841 kernel: device-mapper: uevent: version 1.0.3
Jul 15 04:38:55.493721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 04:38:55.503733 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 04:38:55.678749 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 04:38:55.686081 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 04:38:55.707154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 04:38:55.734347 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 04:38:55.734394 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (843)
Jul 15 04:38:55.740109 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692
Jul 15 04:38:55.744397 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:38:55.747335 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 04:38:56.011676 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 04:38:56.015514 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:38:56.021906 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 04:38:56.022597 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 04:38:56.045331 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 04:38:56.068767 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (866)
Jul 15 04:38:56.068790 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:38:56.072792 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:38:56.075565 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 04:38:56.098758 kernel: BTRFS info (device sda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:38:56.099157 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 04:38:56.104565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 04:38:56.165508 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:38:56.175599 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 04:38:56.208292 systemd-networkd[1012]: lo: Link UP
Jul 15 04:38:56.208303 systemd-networkd[1012]: lo: Gained carrier
Jul 15 04:38:56.209512 systemd-networkd[1012]: Enumeration completed
Jul 15 04:38:56.209974 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:38:56.209977 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 04:38:56.210834 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 04:38:56.215557 systemd[1]: Reached target network.target - Network.
Jul 15 04:38:56.267738 kernel: mlx5_core ee9d:00:02.0 enP61085s1: Link up
Jul 15 04:38:56.299753 kernel: hv_netvsc 0022487d-6654-0022-487d-66540022487d eth0: Data path switched to VF: enP61085s1
Jul 15 04:38:56.300136 systemd-networkd[1012]: enP61085s1: Link UP
Jul 15 04:38:56.302929 systemd-networkd[1012]: eth0: Link UP
Jul 15 04:38:56.303005 systemd-networkd[1012]: eth0: Gained carrier
Jul 15 04:38:56.303015 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:38:56.316548 systemd-networkd[1012]: enP61085s1: Gained carrier
Jul 15 04:38:56.329746 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 15 04:38:56.924988 ignition[922]: Ignition 2.21.0
Jul 15 04:38:56.925002 ignition[922]: Stage: fetch-offline
Jul 15 04:38:56.928153 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:38:56.925072 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:38:56.936260 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 04:38:56.925078 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:38:56.925170 ignition[922]: parsed url from cmdline: ""
Jul 15 04:38:56.925172 ignition[922]: no config URL provided
Jul 15 04:38:56.925175 ignition[922]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 04:38:56.925180 ignition[922]: no config at "/usr/lib/ignition/user.ign"
Jul 15 04:38:56.925184 ignition[922]: failed to fetch config: resource requires networking
Jul 15 04:38:56.925397 ignition[922]: Ignition finished successfully
Jul 15 04:38:56.968057 ignition[1022]: Ignition 2.21.0
Jul 15 04:38:56.968061 ignition[1022]: Stage: fetch
Jul 15 04:38:56.968216 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:38:56.968222 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:38:56.968295 ignition[1022]: parsed url from cmdline: ""
Jul 15 04:38:56.968297 ignition[1022]: no config URL provided
Jul 15 04:38:56.968300 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 04:38:56.968311 ignition[1022]: no config at "/usr/lib/ignition/user.ign"
Jul 15 04:38:56.968344 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 15 04:38:57.061753 ignition[1022]: GET result: OK
Jul 15 04:38:57.061817 ignition[1022]: config has been read from IMDS userdata
Jul 15 04:38:57.061856 ignition[1022]: parsing config with SHA512: 85534e9f44b01f3b9c6b930cf99fe3cc4e49e165a32152ba6563e7c738ede9634940a65558cc38cbfe8782a788740db7f32602ff29efa3028d965bbcf294e6ea
Jul 15 04:38:57.066587 unknown[1022]: fetched base config from "system"
Jul 15 04:38:57.067149 ignition[1022]: fetch: fetch complete
Jul 15 04:38:57.066597 unknown[1022]: fetched base config from "system"
Jul 15 04:38:57.067153 ignition[1022]: fetch: fetch passed
Jul 15 04:38:57.066600 unknown[1022]: fetched user config from "azure"
Jul 15 04:38:57.067206 ignition[1022]: Ignition finished successfully
Jul 15 04:38:57.068999 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 04:38:57.075078 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 04:38:57.110316 ignition[1029]: Ignition 2.21.0
Jul 15 04:38:57.110330 ignition[1029]: Stage: kargs
Jul 15 04:38:57.114143 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 04:38:57.110480 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:38:57.119745 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 04:38:57.110487 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:38:57.110997 ignition[1029]: kargs: kargs passed
Jul 15 04:38:57.111034 ignition[1029]: Ignition finished successfully
Jul 15 04:38:57.146121 ignition[1036]: Ignition 2.21.0
Jul 15 04:38:57.146133 ignition[1036]: Stage: disks
Jul 15 04:38:57.146301 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:38:57.149977 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 04:38:57.146308 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:38:57.155054 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 04:38:57.146956 ignition[1036]: disks: disks passed
Jul 15 04:38:57.162090 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 04:38:57.146994 ignition[1036]: Ignition finished successfully
Jul 15 04:38:57.171046 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 04:38:57.182607 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 04:38:57.191456 systemd[1]: Reached target basic.target - Basic System.
Jul 15 04:38:57.199082 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 04:38:57.268370 systemd-fsck[1045]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 15 04:38:57.273816 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 04:38:57.279064 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 04:38:57.453736 kernel: EXT4-fs (sda9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none.
Jul 15 04:38:57.454633 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 04:38:57.458339 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 04:38:57.464939 systemd-networkd[1012]: eth0: Gained IPv6LL
Jul 15 04:38:57.478315 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:38:57.485046 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 04:38:57.499292 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 15 04:38:57.509296 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 04:38:57.517424 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:38:57.545798 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1059)
Jul 15 04:38:57.545819 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:38:57.545826 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:38:57.545833 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 04:38:57.542171 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 04:38:57.556855 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 04:38:57.567341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:38:57.844817 systemd-networkd[1012]: enP61085s1: Gained IPv6LL
Jul 15 04:38:57.873876 coreos-metadata[1061]: Jul 15 04:38:57.873 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 15 04:38:57.879903 coreos-metadata[1061]: Jul 15 04:38:57.879 INFO Fetch successful
Jul 15 04:38:57.879903 coreos-metadata[1061]: Jul 15 04:38:57.879 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 15 04:38:57.891786 coreos-metadata[1061]: Jul 15 04:38:57.891 INFO Fetch successful
Jul 15 04:38:57.904027 coreos-metadata[1061]: Jul 15 04:38:57.903 INFO wrote hostname ci-4396.0.0-n-16ec4aa50e to /sysroot/etc/hostname
Jul 15 04:38:57.910532 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 15 04:38:58.082370 initrd-setup-root[1089]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 04:38:58.116362 initrd-setup-root[1096]: cut: /sysroot/etc/group: No such file or directory
Jul 15 04:38:58.121692 initrd-setup-root[1103]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 04:38:58.126006 initrd-setup-root[1110]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 04:38:58.907922 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 04:38:58.914749 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 04:38:58.928882 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 04:38:58.940010 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 04:38:58.951100 kernel: BTRFS info (device sda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:38:58.968780 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 04:38:58.974434 ignition[1177]: INFO : Ignition 2.21.0
Jul 15 04:38:58.974434 ignition[1177]: INFO : Stage: mount
Jul 15 04:38:58.974434 ignition[1177]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:38:58.974434 ignition[1177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:38:58.974434 ignition[1177]: INFO : mount: mount passed
Jul 15 04:38:58.974434 ignition[1177]: INFO : Ignition finished successfully
Jul 15 04:38:58.977088 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 04:38:58.983542 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 04:38:59.007830 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:38:59.043318 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1191)
Jul 15 04:38:59.043354 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:38:59.047992 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:38:59.051027 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 04:38:59.053277 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:38:59.081775 ignition[1208]: INFO : Ignition 2.21.0
Jul 15 04:38:59.081775 ignition[1208]: INFO : Stage: files
Jul 15 04:38:59.087724 ignition[1208]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:38:59.087724 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:38:59.087724 ignition[1208]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 04:38:59.087724 ignition[1208]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 04:38:59.087724 ignition[1208]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 04:38:59.111299 ignition[1208]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 04:38:59.111299 ignition[1208]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 04:38:59.111299 ignition[1208]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 04:38:59.100544 unknown[1208]: wrote ssh authorized keys file for user: core
Jul 15 04:38:59.143749 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 15 04:38:59.143749 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 15 04:38:59.298348 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 04:38:59.414937 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:38:59.421966 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:38:59.473117 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:38:59.473117 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:38:59.473117 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:38:59.495965 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:38:59.495965 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:38:59.495965 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 15 04:39:00.034839 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 04:39:02.915944 ignition[1208]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:39:02.915944 ignition[1208]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 04:39:03.240588 ignition[1208]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:39:03.258264 ignition[1208]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:39:03.258264 ignition[1208]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 04:39:03.278937 ignition[1208]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 04:39:03.278937 ignition[1208]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 04:39:03.278937 ignition[1208]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:39:03.278937 ignition[1208]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:39:03.278937 ignition[1208]: INFO : files: files passed
Jul 15 04:39:03.278937 ignition[1208]: INFO : Ignition finished successfully
Jul 15 04:39:03.259700 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 04:39:03.270798 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 04:39:03.305106 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 04:39:03.318324 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 04:39:03.340625 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:39:03.340625 initrd-setup-root-after-ignition[1236]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:39:03.318405 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 04:39:03.364078 initrd-setup-root-after-ignition[1240]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:39:03.325111 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:39:03.337468 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 04:39:03.345072 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 04:39:03.386869 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 04:39:03.389019 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 04:39:03.395243 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 04:39:03.403038 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 04:39:03.409938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 04:39:03.410629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 04:39:03.437831 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:39:03.443774 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 04:39:03.470963 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:39:03.475374 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:39:03.483700 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 04:39:03.490954 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 04:39:03.491057 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:39:03.502533 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 04:39:03.510491 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 04:39:03.517330 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 04:39:03.524360 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:39:03.532768 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 04:39:03.540827 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:39:03.549052 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 04:39:03.556595 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:39:03.565572 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 04:39:03.573796 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 04:39:03.580786 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 04:39:03.587387 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 04:39:03.587546 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:39:03.597612 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:39:03.605373 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:39:03.613532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 04:39:03.613624 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:39:03.623052 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 04:39:03.623194 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:39:03.635362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 04:39:03.635500 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:39:03.643194 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 04:39:03.643302 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 04:39:03.650562 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 15 04:39:03.650662 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 15 04:39:03.659784 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 04:39:03.689830 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 04:39:03.697767 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 04:39:03.697876 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:39:03.715041 ignition[1261]: INFO : Ignition 2.21.0
Jul 15 04:39:03.715041 ignition[1261]: INFO : Stage: umount
Jul 15 04:39:03.715041 ignition[1261]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:03.715041 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:03.715041 ignition[1261]: INFO : umount: umount passed
Jul 15 04:39:03.715041 ignition[1261]: INFO : Ignition finished successfully
Jul 15 04:39:03.704773 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 04:39:03.704851 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:39:03.716666 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 04:39:03.717224 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 04:39:03.725455 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 04:39:03.725639 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 04:39:03.733992 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 04:39:03.734038 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 04:39:03.741694 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 04:39:03.741730 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 04:39:03.748244 systemd[1]: Stopped target network.target - Network.
Jul 15 04:39:03.755223 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 04:39:03.755270 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:39:03.764032 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 04:39:03.770971 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 04:39:03.781553 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:39:03.787316 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 04:39:03.795175 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 04:39:03.803579 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 04:39:03.803636 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:39:03.811093 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 04:39:03.811120 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:39:03.818442 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 04:39:03.818518 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 04:39:03.825540 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 04:39:03.825583 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 04:39:03.832586 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 04:39:03.840308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 04:39:03.853907 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 04:39:03.854449 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 04:39:03.854538 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 04:39:03.869682 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 04:39:03.869918 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 04:39:03.869984 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 04:39:03.879458 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 04:39:03.879539 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 04:39:03.891067 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 04:39:03.891273 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 04:39:03.891338 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 04:39:03.899073 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 04:39:03.905850 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 04:39:03.905889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:39:03.913426 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 04:39:03.913477 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 04:39:04.064396 kernel: hv_netvsc 0022487d-6654-0022-487d-66540022487d eth0: Data path switched from VF: enP61085s1
Jul 15 04:39:03.921785 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 04:39:03.930405 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 04:39:03.930467 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:39:03.936375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 04:39:03.936449 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:39:03.947422 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 04:39:03.947459 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:39:03.957418 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 04:39:03.957477 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:39:03.969840 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:39:03.975034 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 04:39:03.975087 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:39:03.997018 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 04:39:03.998066 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:39:04.005408 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 04:39:04.005441 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:39:04.013911 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 04:39:04.013937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:39:04.021018 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 04:39:04.021066 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 04:39:04.032680 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 04:39:04.032733 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 04:39:04.051473 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 04:39:04.051529 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 04:39:04.068594 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 04:39:04.083700 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 04:39:04.083767 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:39:04.097384 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 04:39:04.097426 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:39:04.102821 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 04:39:04.102863 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:39:04.111485 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 04:39:04.111525 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:39:04.122133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:39:04.122173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:39:04.135239 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 04:39:04.135278 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 04:39:04.135301 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 04:39:04.135327 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:39:04.135580 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 04:39:04.135678 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 04:39:04.156213 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 04:39:04.156332 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 04:39:04.163640 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 04:39:04.172818 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 04:39:04.320509 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Jul 15 04:39:04.213616 systemd[1]: Switching root.
Jul 15 04:39:04.323018 systemd-journald[225]: Journal stopped
Jul 15 04:39:08.651166 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 04:39:08.651204 kernel: SELinux: policy capability open_perms=1
Jul 15 04:39:08.651213 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 04:39:08.651219 kernel: SELinux: policy capability always_check_network=0
Jul 15 04:39:08.651229 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 04:39:08.651234 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 04:39:08.651240 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 04:39:08.651246 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 04:39:08.651251 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 04:39:08.651257 kernel: audit: type=1403 audit(1752554344.995:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 04:39:08.651264 systemd[1]: Successfully loaded SELinux policy in 145.196ms.
Jul 15 04:39:08.651272 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.286ms.
Jul 15 04:39:08.651279 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 04:39:08.651285 systemd[1]: Detected virtualization microsoft.
Jul 15 04:39:08.651291 systemd[1]: Detected architecture arm64.
Jul 15 04:39:08.651302 systemd[1]: Detected first boot.
Jul 15 04:39:08.651309 systemd[1]: Hostname set to .
Jul 15 04:39:08.651317 systemd[1]: Initializing machine ID from random generator.
Jul 15 04:39:08.651324 zram_generator::config[1304]: No configuration found.
Jul 15 04:39:08.651331 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 04:39:08.651337 systemd[1]: Populated /etc with preset unit settings.
Jul 15 04:39:08.651343 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 04:39:08.651350 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 04:39:08.651356 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 04:39:08.651362 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 04:39:08.651368 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 04:39:08.651375 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 04:39:08.651381 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 04:39:08.651387 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 04:39:08.651394 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 04:39:08.651401 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 04:39:08.651407 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 04:39:08.651413 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 04:39:08.651419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:39:08.651427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:39:08.651433 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 04:39:08.651440 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 04:39:08.651446 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 04:39:08.651453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 04:39:08.651459 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 15 04:39:08.651467 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:39:08.651473 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:39:08.651479 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 04:39:08.651486 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 04:39:08.651492 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 04:39:08.651499 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 04:39:08.651505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:39:08.651511 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 04:39:08.651517 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 04:39:08.651523 systemd[1]: Reached target swap.target - Swaps.
Jul 15 04:39:08.651529 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 04:39:08.651536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 04:39:08.651543 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 04:39:08.651549 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:39:08.651556 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:39:08.651562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:39:08.651568 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 04:39:08.651575 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 04:39:08.651582 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 04:39:08.651588 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 04:39:08.651594 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 04:39:08.651601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 04:39:08.651607 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 04:39:08.651613 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 04:39:08.651620 systemd[1]: Reached target machines.target - Containers.
Jul 15 04:39:08.651626 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 04:39:08.651633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 04:39:08.651639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 04:39:08.651646 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 04:39:08.651652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 04:39:08.651658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 04:39:08.651664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 04:39:08.651670 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 04:39:08.651677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 04:39:08.651683 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 04:39:08.651690 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 04:39:08.651697 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 04:39:08.651703 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 04:39:08.651800 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 04:39:08.651811 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 04:39:08.651818 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 04:39:08.651824 kernel: fuse: init (API version 7.41)
Jul 15 04:39:08.651830 kernel: loop: module loaded
Jul 15 04:39:08.651837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 04:39:08.651844 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 04:39:08.651850 kernel: ACPI: bus type drm_connector registered
Jul 15 04:39:08.651893 systemd-journald[1408]: Collecting audit messages is disabled.
Jul 15 04:39:08.651910 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 04:39:08.651918 systemd-journald[1408]: Journal started
Jul 15 04:39:08.651933 systemd-journald[1408]: Runtime Journal (/run/log/journal/a2c218be38004592a399692bb6353eed) is 8M, max 78.5M, 70.5M free.
Jul 15 04:39:07.918928 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 04:39:07.937222 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 15 04:39:07.937603 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 04:39:07.937873 systemd[1]: systemd-journald.service: Consumed 2.301s CPU time.
Jul 15 04:39:08.672508 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 04:39:08.685891 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 04:39:08.693052 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 04:39:08.693099 systemd[1]: Stopped verity-setup.service.
Jul 15 04:39:08.706667 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 04:39:08.709028 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 04:39:08.713990 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 04:39:08.719860 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 04:39:08.723655 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 04:39:08.728257 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 04:39:08.732891 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 04:39:08.736864 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 04:39:08.742198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:39:08.749089 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 04:39:08.749304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 04:39:08.754417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 04:39:08.754635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 04:39:08.760352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 04:39:08.760545 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 04:39:08.765262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 04:39:08.765487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 04:39:08.771400 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 04:39:08.771603 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 04:39:08.777961 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 04:39:08.778175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 04:39:08.788394 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:39:08.794101 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:39:08.799881 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 04:39:08.806076 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 04:39:08.812361 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:39:08.826476 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 04:39:08.832800 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 04:39:08.844611 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 04:39:08.849003 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 04:39:08.849029 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 04:39:08.854220 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 04:39:08.859914 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 04:39:08.863896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 04:39:08.870288 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 04:39:08.875530 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 04:39:08.879933 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 04:39:08.880565 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 04:39:08.884872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 04:39:08.886494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 04:39:08.891846 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 04:39:08.898827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 04:39:08.907352 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 04:39:08.918241 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 04:39:08.924325 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 04:39:08.931339 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 04:39:08.937744 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 04:39:08.965059 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:39:08.998211 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 04:39:08.998821 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 04:39:09.006816 systemd-journald[1408]: Time spent on flushing to /var/log/journal/a2c218be38004592a399692bb6353eed is 8.970ms for 945 entries.
Jul 15 04:39:09.006816 systemd-journald[1408]: System Journal (/var/log/journal/a2c218be38004592a399692bb6353eed) is 8M, max 2.6G, 2.6G free.
Jul 15 04:39:09.030538 kernel: loop0: detected capacity change from 0 to 134232
Jul 15 04:39:09.030558 systemd-journald[1408]: Received client request to flush runtime journal.
Jul 15 04:39:09.032208 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 04:39:09.041984 systemd-tmpfiles[1445]: ACLs are not supported, ignoring.
Jul 15 04:39:09.041995 systemd-tmpfiles[1445]: ACLs are not supported, ignoring.
Jul 15 04:39:09.057841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:39:09.066946 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 04:39:09.225778 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 04:39:09.232268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 04:39:09.249896 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jul 15 04:39:09.249906 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jul 15 04:39:09.253008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:39:09.444774 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 04:39:09.457734 kernel: loop1: detected capacity change from 0 to 211168
Jul 15 04:39:09.505743 kernel: loop2: detected capacity change from 0 to 105936
Jul 15 04:39:09.765406 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 04:39:09.771586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:39:09.795102 systemd-udevd[1469]: Using default interface naming scheme 'v255'.
Jul 15 04:39:09.797778 kernel: loop3: detected capacity change from 0 to 28800
Jul 15 04:39:10.081824 kernel: loop4: detected capacity change from 0 to 134232
Jul 15 04:39:10.089728 kernel: loop5: detected capacity change from 0 to 211168
Jul 15 04:39:10.096731 kernel: loop6: detected capacity change from 0 to 105936
Jul 15 04:39:10.103717 kernel: loop7: detected capacity change from 0 to 28800
Jul 15 04:39:10.106111 (sd-merge)[1471]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 15 04:39:10.106473 (sd-merge)[1471]: Merged extensions into '/usr'.
Jul 15 04:39:10.109539 systemd[1]: Reload requested from client PID 1443 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 04:39:10.109610 systemd[1]: Reloading...
Jul 15 04:39:10.162738 zram_generator::config[1500]: No configuration found.
Jul 15 04:39:10.301077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:39:10.313888 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 15 04:39:10.399244 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 15 04:39:10.399301 systemd[1]: Reloading finished in 289 ms.
Jul 15 04:39:10.435212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:39:10.443034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 04:39:10.459734 kernel: hv_vmbus: registering driver hv_balloon
Jul 15 04:39:10.459795 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 04:39:10.459805 kernel: hv_vmbus: registering driver hyperv_fb
Jul 15 04:39:10.467931 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 15 04:39:10.477198 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 15 04:39:10.477225 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 15 04:39:10.477233 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 15 04:39:10.480187 systemd[1]: Starting ensure-sysext.service...
Jul 15 04:39:10.487341 kernel: Console: switching to colour dummy device 80x25
Jul 15 04:39:10.489895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 04:39:10.495755 kernel: Console: switching to colour frame buffer device 128x48
Jul 15 04:39:10.511230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 04:39:10.534578 systemd[1]: Reload requested from client PID 1617 ('systemctl') (unit ensure-sysext.service)...
Jul 15 04:39:10.534590 systemd[1]: Reloading...
Jul 15 04:39:10.563783 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 04:39:10.563803 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 04:39:10.564018 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 04:39:10.564153 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 04:39:10.564565 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 04:39:10.564706 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Jul 15 04:39:10.564749 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Jul 15 04:39:10.567212 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 04:39:10.567223 systemd-tmpfiles[1619]: Skipping /boot
Jul 15 04:39:10.574212 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 04:39:10.574223 systemd-tmpfiles[1619]: Skipping /boot
Jul 15 04:39:10.626733 zram_generator::config[1680]: No configuration found.
Jul 15 04:39:10.681740 kernel: MACsec IEEE 802.1AE
Jul 15 04:39:10.708248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:39:10.787027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 15 04:39:10.793193 systemd[1]: Reloading finished in 258 ms.
Jul 15 04:39:10.817114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:39:10.848741 systemd[1]: Finished ensure-sysext.service.
Jul 15 04:39:10.856153 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 04:39:10.864843 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 04:39:10.870150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 04:39:10.871793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 04:39:10.882838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 04:39:10.890643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 04:39:10.899850 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 04:39:10.905620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 04:39:10.906286 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 04:39:10.913122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 04:39:10.914274 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 04:39:10.925841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 04:39:10.932084 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 04:39:10.939163 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 04:39:10.952383 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 04:39:10.958853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:39:10.965446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 04:39:10.966088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 04:39:10.972333 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 04:39:10.973816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 04:39:10.980252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 04:39:10.980405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 04:39:10.985476 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 04:39:10.985601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 04:39:10.995497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 04:39:10.995646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 04:39:10.999794 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 04:39:11.009190 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 04:39:11.015827 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 04:39:11.026288 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 04:39:11.044638 augenrules[1806]: No rules
Jul 15 04:39:11.045618 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 04:39:11.045909 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 04:39:11.118236 systemd-resolved[1775]: Positive Trust Anchors:
Jul 15 04:39:11.118520 systemd-resolved[1775]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 04:39:11.118583 systemd-resolved[1775]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 04:39:11.135545 systemd-resolved[1775]: Using system hostname 'ci-4396.0.0-n-16ec4aa50e'.
Jul 15 04:39:11.136634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 04:39:11.141738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:39:11.304534 systemd-networkd[1618]: lo: Link UP
Jul 15 04:39:11.304546 systemd-networkd[1618]: lo: Gained carrier
Jul 15 04:39:11.306298 systemd-networkd[1618]: Enumeration completed
Jul 15 04:39:11.306418 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 04:39:11.311390 systemd-networkd[1618]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:39:11.311398 systemd-networkd[1618]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 04:39:11.311934 systemd[1]: Reached target network.target - Network.
Jul 15 04:39:11.316776 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 04:39:11.322945 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 04:39:11.362729 kernel: mlx5_core ee9d:00:02.0 enP61085s1: Link up
Jul 15 04:39:11.385778 kernel: hv_netvsc 0022487d-6654-0022-487d-66540022487d eth0: Data path switched to VF: enP61085s1
Jul 15 04:39:11.387687 systemd-networkd[1618]: enP61085s1: Link UP
Jul 15 04:39:11.388995 systemd-networkd[1618]: eth0: Link UP
Jul 15 04:39:11.389010 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 04:39:11.389166 systemd-networkd[1618]: eth0: Gained carrier
Jul 15 04:39:11.389191 systemd-networkd[1618]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:39:11.397041 systemd-networkd[1618]: enP61085s1: Gained carrier
Jul 15 04:39:11.409789 systemd-networkd[1618]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 15 04:39:11.418078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:39:11.576278 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 04:39:11.581502 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 04:39:12.820875 systemd-networkd[1618]: enP61085s1: Gained IPv6LL
Jul 15 04:39:13.012870 systemd-networkd[1618]: eth0: Gained IPv6LL
Jul 15 04:39:13.015199 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 04:39:13.021360 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 04:39:13.559437 ldconfig[1438]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 04:39:13.568918 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 04:39:13.574368 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 04:39:13.585534 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 04:39:13.590473 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 04:39:13.594660 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 04:39:13.599694 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 04:39:13.604566 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 04:39:13.608462 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 04:39:13.613307 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 04:39:13.617993 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 04:39:13.618020 systemd[1]: Reached target paths.target - Path Units.
Jul 15 04:39:13.621423 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 04:39:13.625821 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 04:39:13.631468 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 04:39:13.636539 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 04:39:13.641745 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 04:39:13.647875 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 04:39:13.653979 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 04:39:13.670418 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 04:39:13.675746 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 04:39:13.679869 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 04:39:13.683539 systemd[1]: Reached target basic.target - Basic System.
Jul 15 04:39:13.687460 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 04:39:13.687561 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 04:39:13.689382 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 15 04:39:13.705775 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 04:39:13.712840 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 04:39:13.721867 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 04:39:13.726318 (chronyd)[1829]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 15 04:39:13.726831 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 04:39:13.747258 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 04:39:13.749006 chronyd[1839]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 15 04:39:13.753828 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 04:39:13.757850 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 04:39:13.758678 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 15 04:39:13.764561 jq[1837]: false
Jul 15 04:39:13.765061 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 15 04:39:13.766427 KVP[1841]: KVP starting; pid is:1841
Jul 15 04:39:13.766765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:39:13.767513 chronyd[1839]: Timezone right/UTC failed leap second check, ignoring
Jul 15 04:39:13.771705 chronyd[1839]: Loaded seccomp filter (level 2)
Jul 15 04:39:13.775509 KVP[1841]: KVP LIC Version: 3.1
Jul 15 04:39:13.775757 kernel: hv_utils: KVP IC version 4.0
Jul 15 04:39:13.776143 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 04:39:13.783532 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 04:39:13.793786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 04:39:13.798701 extend-filesystems[1840]: Found /dev/sda6
Jul 15 04:39:13.805064 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 04:39:13.817829 extend-filesystems[1840]: Found /dev/sda9
Jul 15 04:39:13.813864 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 04:39:13.822487 extend-filesystems[1840]: Checking size of /dev/sda9
Jul 15 04:39:13.826949 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 04:39:13.832065 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 04:39:13.833216 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 04:39:13.837836 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 04:39:13.842152 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 04:39:13.848031 systemd[1]: Started chronyd.service - NTP client/server.
Jul 15 04:39:13.853147 jq[1866]: true
Jul 15 04:39:13.854907 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 04:39:13.860623 extend-filesystems[1840]: Old size kept for /dev/sda9
Jul 15 04:39:13.862164 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 04:39:13.862320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 04:39:13.862517 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 04:39:13.862642 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 04:39:13.879571 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 04:39:13.879741 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 04:39:13.887378 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 04:39:13.894161 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 04:39:13.894319 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 04:39:13.904628 update_engine[1863]: I20250715 04:39:13.904549 1863 main.cc:92] Flatcar Update Engine starting
Jul 15 04:39:13.916856 jq[1880]: true
Jul 15 04:39:13.917436 (ntainerd)[1881]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 04:39:13.926022 systemd-logind[1861]: New seat seat0.
Jul 15 04:39:13.931918 systemd-logind[1861]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 04:39:13.932074 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 04:39:13.964161 tar[1877]: linux-arm64/LICENSE
Jul 15 04:39:13.964462 tar[1877]: linux-arm64/helm
Jul 15 04:39:14.042402 bash[1928]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 04:39:14.043774 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 04:39:14.052264 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 15 04:39:14.081925 dbus-daemon[1834]: [system] SELinux support is enabled
Jul 15 04:39:14.082066 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 04:39:14.091474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 04:39:14.092034 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 04:39:14.101275 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 04:39:14.101408 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 04:39:14.109119 update_engine[1863]: I20250715 04:39:14.108666 1863 update_check_scheduler.cc:74] Next update check in 4m6s
Jul 15 04:39:14.112447 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 04:39:14.112808 dbus-daemon[1834]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 15 04:39:14.125354 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 04:39:14.176492 coreos-metadata[1831]: Jul 15 04:39:14.174 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 15 04:39:14.186731 coreos-metadata[1831]: Jul 15 04:39:14.184 INFO Fetch successful
Jul 15 04:39:14.186731 coreos-metadata[1831]: Jul 15 04:39:14.184 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 15 04:39:14.189534 coreos-metadata[1831]: Jul 15 04:39:14.189 INFO Fetch successful
Jul 15 04:39:14.189534 coreos-metadata[1831]: Jul 15 04:39:14.189 INFO Fetching http://168.63.129.16/machine/b6fc3542-1ed1-4adc-96ff-cd1b4aaf30af/fbfbd0c1%2D8cdf%2D4d4f%2D984b%2De626ea03d5c0.%5Fci%2D4396.0.0%2Dn%2D16ec4aa50e?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 15 04:39:14.198347 coreos-metadata[1831]: Jul 15 04:39:14.198 INFO Fetch successful
Jul 15 04:39:14.198347 coreos-metadata[1831]: Jul 15 04:39:14.198 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 15 04:39:14.212367 coreos-metadata[1831]: Jul 15 04:39:14.210 INFO Fetch successful
Jul 15 04:39:14.267019 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 04:39:14.274227 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 04:39:14.353720 sshd_keygen[1876]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 04:39:14.382012 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 04:39:14.391952 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 04:39:14.401802 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 15 04:39:14.415225 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 04:39:14.417058 locksmithd[1963]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 04:39:14.417906 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 04:39:14.426290 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 04:39:14.453315 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 15 04:39:14.465985 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 04:39:14.480752 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 04:39:14.489246 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 15 04:39:14.497423 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 04:39:14.513561 tar[1877]: linux-arm64/README.md
Jul 15 04:39:14.530666 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 04:39:14.548726 containerd[1881]: time="2025-07-15T04:39:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 04:39:14.549761 containerd[1881]: time="2025-07-15T04:39:14.549453768Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 15 04:39:14.556443 containerd[1881]: time="2025-07-15T04:39:14.556400312Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.944µs"
Jul 15 04:39:14.556443 containerd[1881]: time="2025-07-15T04:39:14.556434232Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 04:39:14.556443 containerd[1881]: time="2025-07-15T04:39:14.556448392Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 04:39:14.556610 containerd[1881]: time="2025-07-15T04:39:14.556591552Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 04:39:14.556610 containerd[1881]: time="2025-07-15T04:39:14.556607872Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 04:39:14.556644 containerd[1881]: time="2025-07-15T04:39:14.556625832Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 04:39:14.556679 containerd[1881]: time="2025-07-15T04:39:14.556662288Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 04:39:14.556679 containerd[1881]: time="2025-07-15T04:39:14.556676016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.556877312Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.556891384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.556899248Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.556904944Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.556972096Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.557130416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.557150000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.557157032Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 04:39:14.557245 containerd[1881]: time="2025-07-15T04:39:14.557180104Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 04:39:14.557446 containerd[1881]: time="2025-07-15T04:39:14.557338168Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 04:39:14.557446 containerd[1881]: time="2025-07-15T04:39:14.557404616Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 04:39:14.572150 containerd[1881]: time="2025-07-15T04:39:14.572105416Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 04:39:14.572984 containerd[1881]: time="2025-07-15T04:39:14.572955136Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 04:39:14.573032 containerd[1881]: time="2025-07-15T04:39:14.572989080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 04:39:14.573032 containerd[1881]: time="2025-07-15T04:39:14.573000528Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 04:39:14.573032 containerd[1881]: time="2025-07-15T04:39:14.573009544Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 04:39:14.573085 containerd[1881]: time="2025-07-15T04:39:14.573040840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 04:39:14.573085 containerd[1881]: time="2025-07-15T04:39:14.573049392Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 04:39:14.573085 containerd[1881]: time="2025-07-15T04:39:14.573058952Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 04:39:14.573085 containerd[1881]: time="2025-07-15T04:39:14.573066912Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 04:39:14.573085 containerd[1881]: time="2025-07-15T04:39:14.573073336Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 04:39:14.573085 containerd[1881]: time="2025-07-15T04:39:14.573079040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 04:39:14.573085 containerd[1881]: time="2025-07-15T04:39:14.573087088Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 04:39:14.573780 containerd[1881]: time="2025-07-15T04:39:14.573756840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 04:39:14.573796 containerd[1881]: time="2025-07-15T04:39:14.573784200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 04:39:14.573816 containerd[1881]: time="2025-07-15T04:39:14.573796656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 04:39:14.573816 containerd[1881]: time="2025-07-15T04:39:14.573804720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 04:39:14.573844 containerd[1881]: time="2025-07-15T04:39:14.573818584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 04:39:14.573844 containerd[1881]: time="2025-07-15T04:39:14.573826744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 04:39:14.573844 containerd[1881]: time="2025-07-15T04:39:14.573837208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 04:39:14.573844 containerd[1881]: time="2025-07-15T04:39:14.573844088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 04:39:14.573897 containerd[1881]: time="2025-07-15T04:39:14.573863216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 04:39:14.573897 containerd[1881]: time="2025-07-15T04:39:14.573870040Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 04:39:14.573897 containerd[1881]: time="2025-07-15T04:39:14.573877104Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 04:39:14.574337 containerd[1881]: time="2025-07-15T04:39:14.574316216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 04:39:14.574353 containerd[1881]: time="2025-07-15T04:39:14.574339920Z" level=info msg="Start snapshots syncer"
Jul 15 04:39:14.574372 containerd[1881]: time="2025-07-15T04:39:14.574362160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 04:39:14.575485 containerd[1881]: time="2025-07-15T04:39:14.575441320Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 04:39:14.575602 containerd[1881]: time="2025-07-15T04:39:14.575502424Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576004296Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576164784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576190976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576201784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576210568Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576219352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576227184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576237840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576263856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576271784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576279496Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576303080Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576312528Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 04:39:14.576751 containerd[1881]: time="2025-07-15T04:39:14.576330672Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576336912Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576341264Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576347256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576354816Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576368008Z" level=info msg="runtime interface created"
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576371168Z" level=info msg="created NRI interface"
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576376200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576384896Z" level=info msg="Connect containerd service"
Jul 15 04:39:14.577003 containerd[1881]: time="2025-07-15T04:39:14.576404688Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 04:39:14.577637 containerd[1881]: time="2025-07-15T04:39:14.577606272Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 04:39:14.672851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:39:14.678090 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:39:14.916985 kubelet[2027]: E0715 04:39:14.916930    2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:39:14.919185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:39:14.919294 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:39:14.919586 systemd[1]: kubelet.service: Consumed 545ms CPU time, 257.7M memory peak.
Jul 15 04:39:15.168015 containerd[1881]: time="2025-07-15T04:39:15.167703352Z" level=info msg="Start subscribing containerd event"
Jul 15 04:39:15.168015 containerd[1881]: time="2025-07-15T04:39:15.167970232Z" level=info msg="Start recovering state"
Jul 15 04:39:15.168240 containerd[1881]: time="2025-07-15T04:39:15.168217568Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 04:39:15.168282 containerd[1881]: time="2025-07-15T04:39:15.168276608Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168798984Z" level=info msg="Start event monitor"
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168824096Z" level=info msg="Start cni network conf syncer for default"
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168831008Z" level=info msg="Start streaming server"
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168842472Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168848872Z" level=info msg="runtime interface starting up..."
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168854640Z" level=info msg="starting plugins..."
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168870856Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 04:39:15.169618 containerd[1881]: time="2025-07-15T04:39:15.168988016Z" level=info msg="containerd successfully booted in 0.623448s"
Jul 15 04:39:15.169733 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 04:39:15.177239 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 04:39:15.183403 systemd[1]: Startup finished in 1.625s (kernel) + 12.900s (initrd) + 10.330s (userspace) = 24.857s.
Jul 15 04:39:15.459647 login[2013]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:39:15.459778 login[2012]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:39:15.470118 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 04:39:15.471067 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 04:39:15.475101 systemd-logind[1861]: New session 2 of user core.
Jul 15 04:39:15.477830 systemd-logind[1861]: New session 1 of user core.
Jul 15 04:39:15.490198 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:39:15.493044 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:39:15.506311 (systemd)[2050]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:39:15.508451 systemd-logind[1861]: New session c1 of user core. Jul 15 04:39:15.762517 systemd[2050]: Queued start job for default target default.target. Jul 15 04:39:15.767414 systemd[2050]: Created slice app.slice - User Application Slice. Jul 15 04:39:15.767435 systemd[2050]: Reached target paths.target - Paths. Jul 15 04:39:15.767465 systemd[2050]: Reached target timers.target - Timers. Jul 15 04:39:15.768530 systemd[2050]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:39:15.776375 systemd[2050]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:39:15.776502 systemd[2050]: Reached target sockets.target - Sockets. Jul 15 04:39:15.776605 systemd[2050]: Reached target basic.target - Basic System. Jul 15 04:39:15.776787 systemd[2050]: Reached target default.target - Main User Target. Jul 15 04:39:15.776890 systemd[2050]: Startup finished in 264ms. Jul 15 04:39:15.777801 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:39:15.778993 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 04:39:15.779567 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 15 04:39:15.789230 waagent[2010]: 2025-07-15T04:39:15.789166Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 15 04:39:15.794178 waagent[2010]: 2025-07-15T04:39:15.794133Z INFO Daemon Daemon OS: flatcar 4396.0.0 Jul 15 04:39:15.797887 waagent[2010]: 2025-07-15T04:39:15.797844Z INFO Daemon Daemon Python: 3.11.13 Jul 15 04:39:15.804006 waagent[2010]: 2025-07-15T04:39:15.803770Z INFO Daemon Daemon Run daemon Jul 15 04:39:15.809086 waagent[2010]: 2025-07-15T04:39:15.808859Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4396.0.0' Jul 15 04:39:15.816873 waagent[2010]: 2025-07-15T04:39:15.816818Z INFO Daemon Daemon Using waagent for provisioning Jul 15 04:39:15.822279 waagent[2010]: 2025-07-15T04:39:15.822210Z INFO Daemon Daemon Activate resource disk Jul 15 04:39:15.826134 waagent[2010]: 2025-07-15T04:39:15.826083Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 15 04:39:15.834127 waagent[2010]: 2025-07-15T04:39:15.834084Z INFO Daemon Daemon Found device: None Jul 15 04:39:15.839314 waagent[2010]: 2025-07-15T04:39:15.838316Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 15 04:39:15.845079 waagent[2010]: 2025-07-15T04:39:15.844816Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 15 04:39:15.853391 waagent[2010]: 2025-07-15T04:39:15.853335Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 04:39:15.858044 waagent[2010]: 2025-07-15T04:39:15.857990Z INFO Daemon Daemon Running default provisioning handler Jul 15 04:39:15.867171 waagent[2010]: 2025-07-15T04:39:15.867123Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 15 04:39:15.879762 waagent[2010]: 2025-07-15T04:39:15.878844Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 15 04:39:15.886576 waagent[2010]: 2025-07-15T04:39:15.886524Z INFO Daemon Daemon cloud-init is enabled: False Jul 15 04:39:15.890448 waagent[2010]: 2025-07-15T04:39:15.890417Z INFO Daemon Daemon Copying ovf-env.xml Jul 15 04:39:15.994287 waagent[2010]: 2025-07-15T04:39:15.994206Z INFO Daemon Daemon Successfully mounted dvd Jul 15 04:39:16.017551 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 15 04:39:16.019372 waagent[2010]: 2025-07-15T04:39:16.018974Z INFO Daemon Daemon Detect protocol endpoint Jul 15 04:39:16.022611 waagent[2010]: 2025-07-15T04:39:16.022573Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 04:39:16.026570 waagent[2010]: 2025-07-15T04:39:16.026538Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 15 04:39:16.030826 waagent[2010]: 2025-07-15T04:39:16.030797Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 15 04:39:16.034362 waagent[2010]: 2025-07-15T04:39:16.034329Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 15 04:39:16.038052 waagent[2010]: 2025-07-15T04:39:16.038022Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 15 04:39:16.062266 waagent[2010]: 2025-07-15T04:39:16.062230Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 15 04:39:16.067961 waagent[2010]: 2025-07-15T04:39:16.067936Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 15 04:39:16.072132 waagent[2010]: 2025-07-15T04:39:16.072105Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 15 04:39:16.144757 waagent[2010]: 2025-07-15T04:39:16.144297Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 15 04:39:16.149051 waagent[2010]: 2025-07-15T04:39:16.149013Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 15 04:39:16.156827 waagent[2010]: 2025-07-15T04:39:16.156792Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 04:39:16.173901 waagent[2010]: 2025-07-15T04:39:16.173869Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 15 04:39:16.179378 waagent[2010]: 2025-07-15T04:39:16.179341Z INFO Daemon Jul 15 04:39:16.181752 waagent[2010]: 2025-07-15T04:39:16.181719Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 98808a82-42fb-44a9-9c37-b3bae29a2d64 eTag: 15839609508870617671 source: Fabric] Jul 15 04:39:16.191572 waagent[2010]: 2025-07-15T04:39:16.191539Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 15 04:39:16.197832 waagent[2010]: 2025-07-15T04:39:16.197803Z INFO Daemon Jul 15 04:39:16.200464 waagent[2010]: 2025-07-15T04:39:16.200439Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 15 04:39:16.210245 waagent[2010]: 2025-07-15T04:39:16.210216Z INFO Daemon Daemon Downloading artifacts profile blob Jul 15 04:39:16.283655 waagent[2010]: 2025-07-15T04:39:16.283545Z INFO Daemon Downloaded certificate {'thumbprint': 'D9BFFCF02D999D9B5F2DCC154169D9DF8698D8C6', 'hasPrivateKey': True} Jul 15 04:39:16.291664 waagent[2010]: 2025-07-15T04:39:16.291625Z INFO Daemon Downloaded certificate {'thumbprint': '76B6AF984BDC19910D4662D4179F36246D785689', 'hasPrivateKey': False} Jul 15 04:39:16.299174 waagent[2010]: 2025-07-15T04:39:16.299138Z INFO Daemon Fetch goal state completed Jul 15 04:39:16.308879 waagent[2010]: 2025-07-15T04:39:16.308852Z INFO Daemon Daemon Starting provisioning Jul 15 04:39:16.312649 waagent[2010]: 2025-07-15T04:39:16.312618Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 15 04:39:16.316173 waagent[2010]: 2025-07-15T04:39:16.316150Z INFO Daemon Daemon Set hostname [ci-4396.0.0-n-16ec4aa50e] Jul 15 04:39:16.334808 waagent[2010]: 2025-07-15T04:39:16.334771Z INFO Daemon Daemon Publish hostname [ci-4396.0.0-n-16ec4aa50e] Jul 15 04:39:16.340571 waagent[2010]: 2025-07-15T04:39:16.340535Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 15 04:39:16.345593 waagent[2010]: 2025-07-15T04:39:16.345561Z INFO Daemon Daemon Primary interface is [eth0] Jul 15 04:39:16.355658 systemd-networkd[1618]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:16.355918 systemd-networkd[1618]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:39:16.355949 systemd-networkd[1618]: eth0: DHCP lease lost Jul 15 04:39:16.356581 waagent[2010]: 2025-07-15T04:39:16.356535Z INFO Daemon Daemon Create user account if not exists Jul 15 04:39:16.360813 waagent[2010]: 2025-07-15T04:39:16.360779Z INFO Daemon Daemon User core already exists, skip useradd Jul 15 04:39:16.365084 waagent[2010]: 2025-07-15T04:39:16.365057Z INFO Daemon Daemon Configure sudoer Jul 15 04:39:16.373392 waagent[2010]: 2025-07-15T04:39:16.373347Z INFO Daemon Daemon Configure sshd Jul 15 04:39:16.380434 waagent[2010]: 2025-07-15T04:39:16.380393Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 15 04:39:16.392579 waagent[2010]: 2025-07-15T04:39:16.392540Z INFO Daemon Daemon Deploy ssh public key. 
Jul 15 04:39:16.396758 systemd-networkd[1618]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 04:39:17.491736 waagent[2010]: 2025-07-15T04:39:17.490723Z INFO Daemon Daemon Provisioning complete Jul 15 04:39:17.505216 waagent[2010]: 2025-07-15T04:39:17.505179Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 15 04:39:17.509858 waagent[2010]: 2025-07-15T04:39:17.509827Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 15 04:39:17.516303 waagent[2010]: 2025-07-15T04:39:17.516278Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 15 04:39:17.614750 waagent[2109]: 2025-07-15T04:39:17.614083Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 15 04:39:17.614750 waagent[2109]: 2025-07-15T04:39:17.614213Z INFO ExtHandler ExtHandler OS: flatcar 4396.0.0 Jul 15 04:39:17.614750 waagent[2109]: 2025-07-15T04:39:17.614252Z INFO ExtHandler ExtHandler Python: 3.11.13 Jul 15 04:39:17.614750 waagent[2109]: 2025-07-15T04:39:17.614285Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 15 04:39:17.661137 waagent[2109]: 2025-07-15T04:39:17.661073Z INFO ExtHandler ExtHandler Distro: flatcar-4396.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 15 04:39:17.661461 waagent[2109]: 2025-07-15T04:39:17.661429Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:39:17.661583 waagent[2109]: 2025-07-15T04:39:17.661559Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:39:17.667567 waagent[2109]: 2025-07-15T04:39:17.667519Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 04:39:17.672746 waagent[2109]: 2025-07-15T04:39:17.672605Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 15 
04:39:17.673082 waagent[2109]: 2025-07-15T04:39:17.673048Z INFO ExtHandler Jul 15 04:39:17.673138 waagent[2109]: 2025-07-15T04:39:17.673119Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 97955c02-e928-4c95-8d0d-a43957183fa4 eTag: 15839609508870617671 source: Fabric] Jul 15 04:39:17.673357 waagent[2109]: 2025-07-15T04:39:17.673332Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 15 04:39:17.673782 waagent[2109]: 2025-07-15T04:39:17.673752Z INFO ExtHandler Jul 15 04:39:17.673822 waagent[2109]: 2025-07-15T04:39:17.673806Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 15 04:39:17.677317 waagent[2109]: 2025-07-15T04:39:17.677292Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 15 04:39:17.735246 waagent[2109]: 2025-07-15T04:39:17.735177Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D9BFFCF02D999D9B5F2DCC154169D9DF8698D8C6', 'hasPrivateKey': True} Jul 15 04:39:17.735546 waagent[2109]: 2025-07-15T04:39:17.735514Z INFO ExtHandler Downloaded certificate {'thumbprint': '76B6AF984BDC19910D4662D4179F36246D785689', 'hasPrivateKey': False} Jul 15 04:39:17.735874 waagent[2109]: 2025-07-15T04:39:17.735845Z INFO ExtHandler Fetch goal state completed Jul 15 04:39:17.748061 waagent[2109]: 2025-07-15T04:39:17.747976Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Jul 15 04:39:17.751231 waagent[2109]: 2025-07-15T04:39:17.751185Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2109 Jul 15 04:39:17.751329 waagent[2109]: 2025-07-15T04:39:17.751305Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 15 04:39:17.751566 waagent[2109]: 2025-07-15T04:39:17.751538Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 15 04:39:17.752662 waagent[2109]: 
2025-07-15T04:39:17.752627Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jul 15 04:39:17.753009 waagent[2109]: 2025-07-15T04:39:17.752979Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 15 04:39:17.753125 waagent[2109]: 2025-07-15T04:39:17.753102Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 15 04:39:17.753557 waagent[2109]: 2025-07-15T04:39:17.753526Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 15 04:39:17.788441 waagent[2109]: 2025-07-15T04:39:17.788408Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 15 04:39:17.788589 waagent[2109]: 2025-07-15T04:39:17.788563Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 15 04:39:17.793066 waagent[2109]: 2025-07-15T04:39:17.793039Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 15 04:39:17.797783 systemd[1]: Reload requested from client PID 2126 ('systemctl') (unit waagent.service)... Jul 15 04:39:17.797797 systemd[1]: Reloading... Jul 15 04:39:17.862861 zram_generator::config[2164]: No configuration found. Jul 15 04:39:17.930392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:39:18.013406 systemd[1]: Reloading finished in 215 ms. 
Jul 15 04:39:18.041734 waagent[2109]: 2025-07-15T04:39:18.040890Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 15 04:39:18.041734 waagent[2109]: 2025-07-15T04:39:18.041026Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 15 04:39:18.315770 waagent[2109]: 2025-07-15T04:39:18.315642Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 15 04:39:18.316009 waagent[2109]: 2025-07-15T04:39:18.315976Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 15 04:39:18.316648 waagent[2109]: 2025-07-15T04:39:18.316605Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 15 04:39:18.316964 waagent[2109]: 2025-07-15T04:39:18.316881Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 15 04:39:18.317684 waagent[2109]: 2025-07-15T04:39:18.317123Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:39:18.317684 waagent[2109]: 2025-07-15T04:39:18.317192Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:39:18.317684 waagent[2109]: 2025-07-15T04:39:18.317345Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 15 04:39:18.317684 waagent[2109]: 2025-07-15T04:39:18.317471Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 15 04:39:18.317684 waagent[2109]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 15 04:39:18.317684 waagent[2109]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 15 04:39:18.317684 waagent[2109]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 15 04:39:18.317684 waagent[2109]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:39:18.317684 waagent[2109]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:39:18.317684 waagent[2109]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:39:18.317951 waagent[2109]: 2025-07-15T04:39:18.317914Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 15 04:39:18.318089 waagent[2109]: 2025-07-15T04:39:18.318061Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:39:18.318147 waagent[2109]: 2025-07-15T04:39:18.318111Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 15 04:39:18.318367 waagent[2109]: 2025-07-15T04:39:18.318331Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 15 04:39:18.318401 waagent[2109]: 2025-07-15T04:39:18.318373Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 15 04:39:18.318815 waagent[2109]: 2025-07-15T04:39:18.318787Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 15 04:39:18.318911 waagent[2109]: 2025-07-15T04:39:18.318889Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:39:18.319084 waagent[2109]: 2025-07-15T04:39:18.319054Z INFO EnvHandler ExtHandler Configure routes Jul 15 04:39:18.319549 waagent[2109]: 2025-07-15T04:39:18.319530Z INFO EnvHandler ExtHandler Gateway:None Jul 15 04:39:18.319727 waagent[2109]: 2025-07-15T04:39:18.319690Z INFO EnvHandler ExtHandler Routes:None Jul 15 04:39:18.324352 waagent[2109]: 2025-07-15T04:39:18.324317Z INFO ExtHandler ExtHandler Jul 15 04:39:18.324553 waagent[2109]: 2025-07-15T04:39:18.324530Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 337b326b-9396-472b-ace3-81af038feb67 correlation 8639292c-526c-473a-a767-085aad9e9fcf created: 2025-07-15T04:38:08.104874Z] Jul 15 04:39:18.325056 waagent[2109]: 2025-07-15T04:39:18.325021Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 15 04:39:18.325474 waagent[2109]: 2025-07-15T04:39:18.325445Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 15 04:39:18.352599 waagent[2109]: 2025-07-15T04:39:18.352220Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 15 04:39:18.352599 waagent[2109]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 15 04:39:18.353126 waagent[2109]: 2025-07-15T04:39:18.353088Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AA357F14-9A2D-4B7D-988D-1A11A0031B3D;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 15 04:39:18.353448 waagent[2109]: 2025-07-15T04:39:18.353410Z INFO MonitorHandler ExtHandler Network interfaces: Jul 15 04:39:18.353448 waagent[2109]: Executing ['ip', '-a', '-o', 'link']: Jul 15 04:39:18.353448 waagent[2109]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 15 04:39:18.353448 waagent[2109]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:66:54 brd ff:ff:ff:ff:ff:ff Jul 15 04:39:18.353448 waagent[2109]: 3: enP61085s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:66:54 brd ff:ff:ff:ff:ff:ff\ altname enP61085p0s2 Jul 15 04:39:18.353448 waagent[2109]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 15 04:39:18.353448 waagent[2109]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 15 04:39:18.353448 waagent[2109]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 15 04:39:18.353448 waagent[2109]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 15 04:39:18.353448 waagent[2109]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 15 04:39:18.353448 waagent[2109]: 2: eth0 inet6 fe80::222:48ff:fe7d:6654/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 04:39:18.353448 waagent[2109]: 3: enP61085s1 inet6 fe80::222:48ff:fe7d:6654/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 04:39:18.423315 waagent[2109]: 2025-07-15T04:39:18.423270Z INFO 
EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 15 04:39:18.423315 waagent[2109]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:18.423315 waagent[2109]: pkts bytes target prot opt in out source destination Jul 15 04:39:18.423315 waagent[2109]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:18.423315 waagent[2109]: pkts bytes target prot opt in out source destination Jul 15 04:39:18.423315 waagent[2109]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:18.423315 waagent[2109]: pkts bytes target prot opt in out source destination Jul 15 04:39:18.423315 waagent[2109]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 04:39:18.423315 waagent[2109]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 04:39:18.423315 waagent[2109]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 04:39:18.426045 waagent[2109]: 2025-07-15T04:39:18.425773Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 15 04:39:18.426045 waagent[2109]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:18.426045 waagent[2109]: pkts bytes target prot opt in out source destination Jul 15 04:39:18.426045 waagent[2109]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:18.426045 waagent[2109]: pkts bytes target prot opt in out source destination Jul 15 04:39:18.426045 waagent[2109]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:18.426045 waagent[2109]: pkts bytes target prot opt in out source destination Jul 15 04:39:18.426045 waagent[2109]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 04:39:18.426045 waagent[2109]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 04:39:18.426045 waagent[2109]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 04:39:18.426045 waagent[2109]: 2025-07-15T04:39:18.425966Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 15 
04:39:24.741240 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:39:24.742158 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:56456.service - OpenSSH per-connection server daemon (10.200.16.10:56456). Jul 15 04:39:25.169984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 04:39:25.171545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:25.333190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:25.338111 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:39:25.339132 sshd[2252]: Accepted publickey for core from 10.200.16.10 port 56456 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:39:25.341417 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:25.347048 systemd-logind[1861]: New session 3 of user core. Jul 15 04:39:25.351829 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 04:39:25.414374 kubelet[2263]: E0715 04:39:25.414306 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:39:25.417229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:39:25.417338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:39:25.417612 systemd[1]: kubelet.service: Consumed 109ms CPU time, 106.4M memory peak. Jul 15 04:39:25.769077 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:56468.service - OpenSSH per-connection server daemon (10.200.16.10:56468). 
Jul 15 04:39:26.262656 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 56468 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:39:26.263789 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:26.267256 systemd-logind[1861]: New session 4 of user core. Jul 15 04:39:26.281848 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 04:39:26.614812 sshd[2275]: Connection closed by 10.200.16.10 port 56468 Jul 15 04:39:26.618504 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:26.621414 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:56468.service: Deactivated successfully. Jul 15 04:39:26.622974 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 04:39:26.623791 systemd-logind[1861]: Session 4 logged out. Waiting for processes to exit. Jul 15 04:39:26.625046 systemd-logind[1861]: Removed session 4. Jul 15 04:39:26.702971 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:56474.service - OpenSSH per-connection server daemon (10.200.16.10:56474). Jul 15 04:39:27.198939 sshd[2281]: Accepted publickey for core from 10.200.16.10 port 56474 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:39:27.200042 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:27.203739 systemd-logind[1861]: New session 5 of user core. Jul 15 04:39:27.210844 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 04:39:27.556929 sshd[2284]: Connection closed by 10.200.16.10 port 56474 Jul 15 04:39:27.557546 sshd-session[2281]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:27.560755 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:56474.service: Deactivated successfully. Jul 15 04:39:27.562190 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 04:39:27.562775 systemd-logind[1861]: Session 5 logged out. 
Waiting for processes to exit. Jul 15 04:39:27.564051 systemd-logind[1861]: Removed session 5. Jul 15 04:39:27.646112 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:56476.service - OpenSSH per-connection server daemon (10.200.16.10:56476). Jul 15 04:39:28.130634 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 56476 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:39:28.131680 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:28.135155 systemd-logind[1861]: New session 6 of user core. Jul 15 04:39:28.142835 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 04:39:28.475166 sshd[2293]: Connection closed by 10.200.16.10 port 56476 Jul 15 04:39:28.474958 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:28.478421 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:56476.service: Deactivated successfully. Jul 15 04:39:28.479752 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 04:39:28.481099 systemd-logind[1861]: Session 6 logged out. Waiting for processes to exit. Jul 15 04:39:28.482426 systemd-logind[1861]: Removed session 6. Jul 15 04:39:28.566900 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:56486.service - OpenSSH per-connection server daemon (10.200.16.10:56486). Jul 15 04:39:29.059864 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 56486 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:39:29.060912 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:29.064495 systemd-logind[1861]: New session 7 of user core. Jul 15 04:39:29.074823 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 15 04:39:29.449545 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 04:39:29.449781 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:39:29.459276 sudo[2303]: pam_unix(sudo:session): session closed for user root Jul 15 04:39:29.544674 sshd[2302]: Connection closed by 10.200.16.10 port 56486 Jul 15 04:39:29.545311 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:29.548742 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:56486.service: Deactivated successfully. Jul 15 04:39:29.550286 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 04:39:29.551100 systemd-logind[1861]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:39:29.552412 systemd-logind[1861]: Removed session 7. Jul 15 04:39:29.630310 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:56488.service - OpenSSH per-connection server daemon (10.200.16.10:56488). Jul 15 04:39:30.110372 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 56488 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:39:30.111440 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:30.114887 systemd-logind[1861]: New session 8 of user core. Jul 15 04:39:30.121975 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 15 04:39:30.378954 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 04:39:30.379167 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:39:30.386581 sudo[2314]: pam_unix(sudo:session): session closed for user root
Jul 15 04:39:30.390289 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 04:39:30.390492 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:39:30.398080 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 04:39:30.424559 augenrules[2336]: No rules
Jul 15 04:39:30.425746 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 04:39:30.425939 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 04:39:30.427771 sudo[2313]: pam_unix(sudo:session): session closed for user root
Jul 15 04:39:30.505108 sshd[2312]: Connection closed by 10.200.16.10 port 56488
Jul 15 04:39:30.505641 sshd-session[2309]: pam_unix(sshd:session): session closed for user core
Jul 15 04:39:30.508399 systemd-logind[1861]: Session 8 logged out. Waiting for processes to exit.
Jul 15 04:39:30.508533 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:56488.service: Deactivated successfully.
Jul 15 04:39:30.510230 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 04:39:30.511919 systemd-logind[1861]: Removed session 8.
Jul 15 04:39:30.590076 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:51758.service - OpenSSH per-connection server daemon (10.200.16.10:51758).
Jul 15 04:39:31.045613 sshd[2345]: Accepted publickey for core from 10.200.16.10 port 51758 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:39:31.046634 sshd-session[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:39:31.050265 systemd-logind[1861]: New session 9 of user core.
Jul 15 04:39:31.060849 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 04:39:31.302393 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 04:39:31.302614 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:39:32.390021 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 04:39:32.400079 (dockerd)[2366]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 04:39:33.010423 dockerd[2366]: time="2025-07-15T04:39:33.010366720Z" level=info msg="Starting up"
Jul 15 04:39:33.011103 dockerd[2366]: time="2025-07-15T04:39:33.011077512Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 04:39:33.019145 dockerd[2366]: time="2025-07-15T04:39:33.019059088Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 15 04:39:33.049687 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1688421060-merged.mount: Deactivated successfully.
Jul 15 04:39:33.147262 dockerd[2366]: time="2025-07-15T04:39:33.147209448Z" level=info msg="Loading containers: start."
Jul 15 04:39:33.184734 kernel: Initializing XFRM netlink socket
Jul 15 04:39:33.468554 systemd-networkd[1618]: docker0: Link UP
Jul 15 04:39:33.492382 dockerd[2366]: time="2025-07-15T04:39:33.492291816Z" level=info msg="Loading containers: done."
Jul 15 04:39:33.517901 dockerd[2366]: time="2025-07-15T04:39:33.517855176Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 04:39:33.518074 dockerd[2366]: time="2025-07-15T04:39:33.517946712Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 15 04:39:33.518074 dockerd[2366]: time="2025-07-15T04:39:33.518040416Z" level=info msg="Initializing buildkit"
Jul 15 04:39:33.587098 dockerd[2366]: time="2025-07-15T04:39:33.587055792Z" level=info msg="Completed buildkit initialization"
Jul 15 04:39:33.592361 dockerd[2366]: time="2025-07-15T04:39:33.592242680Z" level=info msg="Daemon has completed initialization"
Jul 15 04:39:33.592533 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 04:39:33.593237 dockerd[2366]: time="2025-07-15T04:39:33.592559128Z" level=info msg="API listen on /run/docker.sock"
Jul 15 04:39:34.047058 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1893344052-merged.mount: Deactivated successfully.
Jul 15 04:39:34.332578 containerd[1881]: time="2025-07-15T04:39:34.332319424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 15 04:39:35.206509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881996414.mount: Deactivated successfully.
Jul 15 04:39:35.667690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 04:39:35.669260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:39:36.063123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:39:36.068951 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:39:36.094847 kubelet[2608]: E0715 04:39:36.094707 2608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:39:36.097010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:39:36.097123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:39:36.097387 systemd[1]: kubelet.service: Consumed 105ms CPU time, 104.7M memory peak.
Jul 15 04:39:36.937628 containerd[1881]: time="2025-07-15T04:39:36.937566448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:36.942131 containerd[1881]: time="2025-07-15T04:39:36.942094888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716"
Jul 15 04:39:36.946734 containerd[1881]: time="2025-07-15T04:39:36.946591256Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:36.951197 containerd[1881]: time="2025-07-15T04:39:36.951174032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:36.952288 containerd[1881]: time="2025-07-15T04:39:36.952250968Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.619895544s"
Jul 15 04:39:36.952423 containerd[1881]: time="2025-07-15T04:39:36.952277688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 15 04:39:36.959278 containerd[1881]: time="2025-07-15T04:39:36.959244416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 15 04:39:37.559779 chronyd[1839]: Selected source PHC0
Jul 15 04:39:38.178221 containerd[1881]: time="2025-07-15T04:39:38.178136648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:38.182262 containerd[1881]: time="2025-07-15T04:39:38.182233772Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623"
Jul 15 04:39:38.185784 containerd[1881]: time="2025-07-15T04:39:38.185761576Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:38.192664 containerd[1881]: time="2025-07-15T04:39:38.192633764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:38.193257 containerd[1881]: time="2025-07-15T04:39:38.193087253Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.233819053s"
Jul 15 04:39:38.193257 containerd[1881]: time="2025-07-15T04:39:38.193115234Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 15 04:39:38.193900 containerd[1881]: time="2025-07-15T04:39:38.193804852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 15 04:39:39.269741 containerd[1881]: time="2025-07-15T04:39:39.269644249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:39.272615 containerd[1881]: time="2025-07-15T04:39:39.272449537Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515"
Jul 15 04:39:39.276597 containerd[1881]: time="2025-07-15T04:39:39.276574233Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:39.282722 containerd[1881]: time="2025-07-15T04:39:39.282687848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:39.283412 containerd[1881]: time="2025-07-15T04:39:39.283212848Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.089380104s"
Jul 15 04:39:39.283412 containerd[1881]: time="2025-07-15T04:39:39.283239272Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 15 04:39:39.283662 containerd[1881]: time="2025-07-15T04:39:39.283639088Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 15 04:39:40.889253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801615651.mount: Deactivated successfully.
Jul 15 04:39:41.166346 containerd[1881]: time="2025-07-15T04:39:41.166132115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:41.168819 containerd[1881]: time="2025-07-15T04:39:41.168786803Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472"
Jul 15 04:39:41.173056 containerd[1881]: time="2025-07-15T04:39:41.173017611Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:41.177340 containerd[1881]: time="2025-07-15T04:39:41.177297779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:41.177541 containerd[1881]: time="2025-07-15T04:39:41.177512123Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.893845971s"
Jul 15 04:39:41.177541 containerd[1881]: time="2025-07-15T04:39:41.177540963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 15 04:39:41.178105 containerd[1881]: time="2025-07-15T04:39:41.178082923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 15 04:39:41.895099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257386366.mount: Deactivated successfully.
Jul 15 04:39:42.983554 containerd[1881]: time="2025-07-15T04:39:42.983502195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:42.986292 containerd[1881]: time="2025-07-15T04:39:42.986117475Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jul 15 04:39:42.993629 containerd[1881]: time="2025-07-15T04:39:42.993607251Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:42.997962 containerd[1881]: time="2025-07-15T04:39:42.997934211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:42.998619 containerd[1881]: time="2025-07-15T04:39:42.998517283Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.820406728s"
Jul 15 04:39:42.998619 containerd[1881]: time="2025-07-15T04:39:42.998543131Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 15 04:39:42.999316 containerd[1881]: time="2025-07-15T04:39:42.999167715Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 04:39:43.599571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018646671.mount: Deactivated successfully.
Jul 15 04:39:43.629782 containerd[1881]: time="2025-07-15T04:39:43.629740715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:39:43.633284 containerd[1881]: time="2025-07-15T04:39:43.633154763Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 15 04:39:43.639433 containerd[1881]: time="2025-07-15T04:39:43.639404587Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:39:43.649855 containerd[1881]: time="2025-07-15T04:39:43.649805699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:39:43.650502 containerd[1881]: time="2025-07-15T04:39:43.650197803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 651.00504ms"
Jul 15 04:39:43.650502 containerd[1881]: time="2025-07-15T04:39:43.650225107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 04:39:43.650856 containerd[1881]: time="2025-07-15T04:39:43.650838707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 15 04:39:44.342575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622809814.mount: Deactivated successfully.
Jul 15 04:39:46.287160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 15 04:39:46.289876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:39:46.390091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:39:46.394104 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:39:46.514999 kubelet[2778]: E0715 04:39:46.514929 2778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:39:46.517241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:39:46.517353 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:39:46.517668 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107.2M memory peak.
Jul 15 04:39:46.871192 containerd[1881]: time="2025-07-15T04:39:46.870862863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:46.876114 containerd[1881]: time="2025-07-15T04:39:46.876085835Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599"
Jul 15 04:39:46.878742 containerd[1881]: time="2025-07-15T04:39:46.878703485Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:46.885160 containerd[1881]: time="2025-07-15T04:39:46.885128543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:39:46.885663 containerd[1881]: time="2025-07-15T04:39:46.885497099Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.234554248s"
Jul 15 04:39:46.885663 containerd[1881]: time="2025-07-15T04:39:46.885521916Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 15 04:39:50.320327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:39:50.320688 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107.2M memory peak.
Jul 15 04:39:50.322573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:39:50.344863 systemd[1]: Reload requested from client PID 2811 ('systemctl') (unit session-9.scope)...
Jul 15 04:39:50.344874 systemd[1]: Reloading...
Jul 15 04:39:50.445789 zram_generator::config[2869]: No configuration found.
Jul 15 04:39:50.505082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:39:50.589621 systemd[1]: Reloading finished in 244 ms.
Jul 15 04:39:50.632816 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 04:39:50.632877 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 04:39:50.633074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:39:50.633126 systemd[1]: kubelet.service: Consumed 72ms CPU time, 95.2M memory peak.
Jul 15 04:39:50.636038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:39:50.864559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:39:50.867119 (kubelet)[2925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 04:39:50.999042 kubelet[2925]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:39:50.999042 kubelet[2925]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 04:39:50.999042 kubelet[2925]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:39:50.999384 kubelet[2925]: I0715 04:39:50.999067 2925 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 04:39:51.376732 kubelet[2925]: I0715 04:39:51.376296 2925 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 15 04:39:51.376732 kubelet[2925]: I0715 04:39:51.376326 2925 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 04:39:51.376732 kubelet[2925]: I0715 04:39:51.376492 2925 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 15 04:39:51.388477 kubelet[2925]: E0715 04:39:51.388439 2925 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 15 04:39:51.389849 kubelet[2925]: I0715 04:39:51.389825 2925 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 04:39:51.394556 kubelet[2925]: I0715 04:39:51.394483 2925 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 04:39:51.398420 kubelet[2925]: I0715 04:39:51.398347 2925 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 04:39:51.399474 kubelet[2925]: I0715 04:39:51.399141 2925 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 04:39:51.399474 kubelet[2925]: I0715 04:39:51.399172 2925 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-16ec4aa50e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 04:39:51.399474 kubelet[2925]: I0715 04:39:51.399304 2925 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 04:39:51.399474 kubelet[2925]: I0715 04:39:51.399311 2925 container_manager_linux.go:303] "Creating device plugin manager"
Jul 15 04:39:51.399474 kubelet[2925]: I0715 04:39:51.399426 2925 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:39:51.401051 kubelet[2925]: I0715 04:39:51.401032 2925 kubelet.go:480] "Attempting to sync node with API server"
Jul 15 04:39:51.401081 kubelet[2925]: I0715 04:39:51.401056 2925 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 04:39:51.401143 kubelet[2925]: I0715 04:39:51.401133 2925 kubelet.go:386] "Adding apiserver pod source"
Jul 15 04:39:51.402667 kubelet[2925]: I0715 04:39:51.402646 2925 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 04:39:51.404895 kubelet[2925]: E0715 04:39:51.404524 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-16ec4aa50e&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 15 04:39:51.406091 kubelet[2925]: E0715 04:39:51.406066 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 15 04:39:51.406252 kubelet[2925]: I0715 04:39:51.406239 2925 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 15 04:39:51.406669 kubelet[2925]: I0715 04:39:51.406652 2925 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 04:39:51.406794 kubelet[2925]: W0715 04:39:51.406784 2925 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 04:39:51.408973 kubelet[2925]: I0715 04:39:51.408952 2925 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 04:39:51.409041 kubelet[2925]: I0715 04:39:51.408988 2925 server.go:1289] "Started kubelet"
Jul 15 04:39:51.409118 kubelet[2925]: I0715 04:39:51.409095 2925 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 04:39:51.409785 kubelet[2925]: I0715 04:39:51.409767 2925 server.go:317] "Adding debug handlers to kubelet server"
Jul 15 04:39:51.411099 kubelet[2925]: I0715 04:39:51.410646 2925 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 04:39:51.411099 kubelet[2925]: I0715 04:39:51.410953 2925 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 04:39:51.411902 kubelet[2925]: E0715 04:39:51.411037 2925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4396.0.0-n-16ec4aa50e.185252eefd7b7c92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4396.0.0-n-16ec4aa50e,UID:ci-4396.0.0-n-16ec4aa50e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4396.0.0-n-16ec4aa50e,},FirstTimestamp:2025-07-15 04:39:51.408966802 +0000 UTC m=+0.539046469,LastTimestamp:2025-07-15 04:39:51.408966802 +0000 UTC m=+0.539046469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4396.0.0-n-16ec4aa50e,}"
Jul 15 04:39:51.414016 kubelet[2925]: I0715 04:39:51.413780 2925 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 04:39:51.414016 kubelet[2925]: I0715 04:39:51.413938 2925 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 04:39:51.415229 kubelet[2925]: E0715 04:39:51.415029 2925 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 04:39:51.415229 kubelet[2925]: E0715 04:39:51.415086 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found"
Jul 15 04:39:51.415229 kubelet[2925]: I0715 04:39:51.415101 2925 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 04:39:51.415332 kubelet[2925]: I0715 04:39:51.415269 2925 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 04:39:51.415332 kubelet[2925]: I0715 04:39:51.415321 2925 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 04:39:51.416587 kubelet[2925]: E0715 04:39:51.415612 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 15 04:39:51.416587 kubelet[2925]: E0715 04:39:51.416305 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-16ec4aa50e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms"
Jul 15 04:39:51.416767 kubelet[2925]: I0715 04:39:51.416689 2925 factory.go:223] Registration of the systemd container factory successfully
Jul 15 04:39:51.416851 kubelet[2925]: I0715 04:39:51.416829 2925 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 04:39:51.417820 kubelet[2925]: I0715 04:39:51.417796 2925 factory.go:223] Registration of the containerd container factory successfully
Jul 15 04:39:51.446581 kubelet[2925]: I0715 04:39:51.446534 2925 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 15 04:39:51.449040 kubelet[2925]: I0715 04:39:51.448936 2925 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 15 04:39:51.449040 kubelet[2925]: I0715 04:39:51.448998 2925 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 15 04:39:51.449040 kubelet[2925]: I0715 04:39:51.449029 2925 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 04:39:51.449040 kubelet[2925]: I0715 04:39:51.449035 2925 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 15 04:39:51.449161 kubelet[2925]: E0715 04:39:51.449065 2925 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 04:39:51.449773 kubelet[2925]: I0715 04:39:51.449751 2925 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 04:39:51.449773 kubelet[2925]: I0715 04:39:51.449763 2925 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 04:39:51.449773 kubelet[2925]: I0715 04:39:51.449778 2925 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:39:51.450400 kubelet[2925]: E0715 04:39:51.450376 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 15 04:39:51.456381 kubelet[2925]: I0715 04:39:51.456359 2925 policy_none.go:49] "None policy: Start"
Jul 15 04:39:51.456381 kubelet[2925]: I0715 04:39:51.456381 2925 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 04:39:51.456464 kubelet[2925]: I0715 04:39:51.456391 2925 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 04:39:51.465493 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 04:39:51.475449 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 04:39:51.478136 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 04:39:51.488490 kubelet[2925]: E0715 04:39:51.488434 2925 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 15 04:39:51.488800 kubelet[2925]: I0715 04:39:51.488754 2925 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 04:39:51.488872 kubelet[2925]: I0715 04:39:51.488770 2925 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 04:39:51.489242 kubelet[2925]: I0715 04:39:51.489211 2925 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 04:39:51.490753 kubelet[2925]: E0715 04:39:51.490700 2925 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Jul 15 04:39:51.490822 kubelet[2925]: E0715 04:39:51.490774 2925 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:51.564148 systemd[1]: Created slice kubepods-burstable-podd35f7e6674cfc69a5b5e2078ae4fc120.slice - libcontainer container kubepods-burstable-podd35f7e6674cfc69a5b5e2078ae4fc120.slice. Jul 15 04:39:51.573302 kubelet[2925]: E0715 04:39:51.573278 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.577252 systemd[1]: Created slice kubepods-burstable-pod45cc6c520dd75e7ef18337535c1dd4f6.slice - libcontainer container kubepods-burstable-pod45cc6c520dd75e7ef18337535c1dd4f6.slice. Jul 15 04:39:51.579257 kubelet[2925]: E0715 04:39:51.579064 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.590853 kubelet[2925]: I0715 04:39:51.590670 2925 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.591203 systemd[1]: Created slice kubepods-burstable-pod40adb79a314dcc91bb63c2601d6daaa5.slice - libcontainer container kubepods-burstable-pod40adb79a314dcc91bb63c2601d6daaa5.slice. 
Jul 15 04:39:51.592049 kubelet[2925]: E0715 04:39:51.591441 2925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.592843 kubelet[2925]: E0715 04:39:51.592824 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.616068 kubelet[2925]: I0715 04:39:51.616003 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d35f7e6674cfc69a5b5e2078ae4fc120-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-16ec4aa50e\" (UID: \"d35f7e6674cfc69a5b5e2078ae4fc120\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.617311 kubelet[2925]: E0715 04:39:51.617275 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-16ec4aa50e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Jul 15 04:39:51.716576 kubelet[2925]: I0715 04:39:51.716351 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d35f7e6674cfc69a5b5e2078ae4fc120-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-16ec4aa50e\" (UID: \"d35f7e6674cfc69a5b5e2078ae4fc120\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.716576 kubelet[2925]: I0715 04:39:51.716418 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d35f7e6674cfc69a5b5e2078ae4fc120-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4396.0.0-n-16ec4aa50e\" (UID: \"d35f7e6674cfc69a5b5e2078ae4fc120\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.716576 kubelet[2925]: I0715 04:39:51.716436 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.716576 kubelet[2925]: I0715 04:39:51.716448 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.716576 kubelet[2925]: I0715 04:39:51.716457 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.716857 kubelet[2925]: I0715 04:39:51.716489 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-kubeconfig\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.716857 kubelet[2925]: I0715 04:39:51.716813 2925 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.716857 kubelet[2925]: I0715 04:39:51.716827 2925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40adb79a314dcc91bb63c2601d6daaa5-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-16ec4aa50e\" (UID: \"40adb79a314dcc91bb63c2601d6daaa5\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.794118 kubelet[2925]: I0715 04:39:51.794089 2925 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.794587 kubelet[2925]: E0715 04:39:51.794562 2925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:51.874737 containerd[1881]: time="2025-07-15T04:39:51.874674419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-16ec4aa50e,Uid:d35f7e6674cfc69a5b5e2078ae4fc120,Namespace:kube-system,Attempt:0,}" Jul 15 04:39:51.880659 containerd[1881]: time="2025-07-15T04:39:51.880548820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-16ec4aa50e,Uid:45cc6c520dd75e7ef18337535c1dd4f6,Namespace:kube-system,Attempt:0,}" Jul 15 04:39:51.894361 containerd[1881]: time="2025-07-15T04:39:51.894336070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-16ec4aa50e,Uid:40adb79a314dcc91bb63c2601d6daaa5,Namespace:kube-system,Attempt:0,}" Jul 15 04:39:52.018553 
kubelet[2925]: E0715 04:39:52.018431 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-16ec4aa50e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Jul 15 04:39:52.196681 kubelet[2925]: I0715 04:39:52.196629 2925 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:52.196982 kubelet[2925]: E0715 04:39:52.196954 2925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:52.250931 kubelet[2925]: E0715 04:39:52.250891 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 04:39:52.338030 kubelet[2925]: E0715 04:39:52.337990 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 04:39:52.365937 kubelet[2925]: E0715 04:39:52.365836 2925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4396.0.0-n-16ec4aa50e.185252eefd7b7c92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4396.0.0-n-16ec4aa50e,UID:ci-4396.0.0-n-16ec4aa50e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4396.0.0-n-16ec4aa50e,},FirstTimestamp:2025-07-15 04:39:51.408966802 +0000 UTC m=+0.539046469,LastTimestamp:2025-07-15 04:39:51.408966802 +0000 UTC m=+0.539046469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4396.0.0-n-16ec4aa50e,}" Jul 15 04:39:52.572265 kubelet[2925]: E0715 04:39:52.572226 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 04:39:52.810011 containerd[1881]: time="2025-07-15T04:39:52.809972589Z" level=info msg="connecting to shim 54f5a734c3c605123890b97225bc8aa906e3bbfaa89e8c46e7f8dfdf7d0dd043" address="unix:///run/containerd/s/4ba12bdbd56a0c271c02101a4c34bf3749411263eed09118129aa0f3930eff6d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:39:52.819084 kubelet[2925]: E0715 04:39:52.819047 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-16ec4aa50e?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s" Jul 15 04:39:52.832280 containerd[1881]: time="2025-07-15T04:39:52.831962344Z" level=info msg="connecting to shim 21f63871b6cbc0c522d66bfb19f084fd91e4c2d55933241c9eff007166e25db6" address="unix:///run/containerd/s/6fba7304522754af512ca14a3dac1756c7d586cd8af38eac587c53d76106f43e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:39:52.834861 systemd[1]: Started 
cri-containerd-54f5a734c3c605123890b97225bc8aa906e3bbfaa89e8c46e7f8dfdf7d0dd043.scope - libcontainer container 54f5a734c3c605123890b97225bc8aa906e3bbfaa89e8c46e7f8dfdf7d0dd043. Jul 15 04:39:52.839244 containerd[1881]: time="2025-07-15T04:39:52.839212516Z" level=info msg="connecting to shim 0fd19130e68441073b35de3a547460ff28e9524bd1b1a4704ce622b70ad52907" address="unix:///run/containerd/s/18a78f18fa0d32a9a9e8302cfa15c9ed50d29a5875847be80b3ae4290b4aab86" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:39:52.849868 systemd[1]: Started cri-containerd-21f63871b6cbc0c522d66bfb19f084fd91e4c2d55933241c9eff007166e25db6.scope - libcontainer container 21f63871b6cbc0c522d66bfb19f084fd91e4c2d55933241c9eff007166e25db6. Jul 15 04:39:52.858078 systemd[1]: Started cri-containerd-0fd19130e68441073b35de3a547460ff28e9524bd1b1a4704ce622b70ad52907.scope - libcontainer container 0fd19130e68441073b35de3a547460ff28e9524bd1b1a4704ce622b70ad52907. Jul 15 04:39:52.892266 containerd[1881]: time="2025-07-15T04:39:52.892224966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-16ec4aa50e,Uid:d35f7e6674cfc69a5b5e2078ae4fc120,Namespace:kube-system,Attempt:0,} returns sandbox id \"54f5a734c3c605123890b97225bc8aa906e3bbfaa89e8c46e7f8dfdf7d0dd043\"" Jul 15 04:39:52.906509 containerd[1881]: time="2025-07-15T04:39:52.906471150Z" level=info msg="CreateContainer within sandbox \"54f5a734c3c605123890b97225bc8aa906e3bbfaa89e8c46e7f8dfdf7d0dd043\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:39:52.911108 containerd[1881]: time="2025-07-15T04:39:52.911054454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-16ec4aa50e,Uid:45cc6c520dd75e7ef18337535c1dd4f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fd19130e68441073b35de3a547460ff28e9524bd1b1a4704ce622b70ad52907\"" Jul 15 04:39:52.918088 containerd[1881]: time="2025-07-15T04:39:52.918012216Z" level=info msg="CreateContainer 
within sandbox \"0fd19130e68441073b35de3a547460ff28e9524bd1b1a4704ce622b70ad52907\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:39:52.918489 containerd[1881]: time="2025-07-15T04:39:52.918449334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-16ec4aa50e,Uid:40adb79a314dcc91bb63c2601d6daaa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f63871b6cbc0c522d66bfb19f084fd91e4c2d55933241c9eff007166e25db6\"" Jul 15 04:39:52.926533 containerd[1881]: time="2025-07-15T04:39:52.926396944Z" level=info msg="CreateContainer within sandbox \"21f63871b6cbc0c522d66bfb19f084fd91e4c2d55933241c9eff007166e25db6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:39:52.965062 containerd[1881]: time="2025-07-15T04:39:52.965026614Z" level=info msg="Container 3344560f3f2a4b4d346fc682d382842e56f0117e7d95fa2c297534d8df7a9af4: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:39:52.974254 kubelet[2925]: E0715 04:39:52.974213 2925 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-16ec4aa50e&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 04:39:52.977753 containerd[1881]: time="2025-07-15T04:39:52.977342553Z" level=info msg="Container 6c725b7ef5814c58687449fa27d76bf6f5fd87217d580cacf96c8f6872a608aa: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:39:52.995822 containerd[1881]: time="2025-07-15T04:39:52.995789460Z" level=info msg="CreateContainer within sandbox \"54f5a734c3c605123890b97225bc8aa906e3bbfaa89e8c46e7f8dfdf7d0dd043\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3344560f3f2a4b4d346fc682d382842e56f0117e7d95fa2c297534d8df7a9af4\"" Jul 15 04:39:52.996481 containerd[1881]: 
time="2025-07-15T04:39:52.996450753Z" level=info msg="StartContainer for \"3344560f3f2a4b4d346fc682d382842e56f0117e7d95fa2c297534d8df7a9af4\"" Jul 15 04:39:52.997392 containerd[1881]: time="2025-07-15T04:39:52.997367070Z" level=info msg="connecting to shim 3344560f3f2a4b4d346fc682d382842e56f0117e7d95fa2c297534d8df7a9af4" address="unix:///run/containerd/s/4ba12bdbd56a0c271c02101a4c34bf3749411263eed09118129aa0f3930eff6d" protocol=ttrpc version=3 Jul 15 04:39:52.999989 containerd[1881]: time="2025-07-15T04:39:52.999624853Z" level=info msg="Container f277585484955517ea3eae0fef4322fed4ebdf86d7db1a563d4fd43ed5fa35ee: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:39:53.000061 kubelet[2925]: I0715 04:39:52.999750 2925 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:53.000091 kubelet[2925]: E0715 04:39:53.000054 2925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:53.015855 systemd[1]: Started cri-containerd-3344560f3f2a4b4d346fc682d382842e56f0117e7d95fa2c297534d8df7a9af4.scope - libcontainer container 3344560f3f2a4b4d346fc682d382842e56f0117e7d95fa2c297534d8df7a9af4. 
Jul 15 04:39:53.030113 containerd[1881]: time="2025-07-15T04:39:53.030081802Z" level=info msg="CreateContainer within sandbox \"0fd19130e68441073b35de3a547460ff28e9524bd1b1a4704ce622b70ad52907\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f277585484955517ea3eae0fef4322fed4ebdf86d7db1a563d4fd43ed5fa35ee\"" Jul 15 04:39:53.031069 containerd[1881]: time="2025-07-15T04:39:53.031046920Z" level=info msg="StartContainer for \"f277585484955517ea3eae0fef4322fed4ebdf86d7db1a563d4fd43ed5fa35ee\"" Jul 15 04:39:53.036107 containerd[1881]: time="2025-07-15T04:39:53.036068406Z" level=info msg="CreateContainer within sandbox \"21f63871b6cbc0c522d66bfb19f084fd91e4c2d55933241c9eff007166e25db6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6c725b7ef5814c58687449fa27d76bf6f5fd87217d580cacf96c8f6872a608aa\"" Jul 15 04:39:53.036368 containerd[1881]: time="2025-07-15T04:39:53.036345759Z" level=info msg="connecting to shim f277585484955517ea3eae0fef4322fed4ebdf86d7db1a563d4fd43ed5fa35ee" address="unix:///run/containerd/s/18a78f18fa0d32a9a9e8302cfa15c9ed50d29a5875847be80b3ae4290b4aab86" protocol=ttrpc version=3 Jul 15 04:39:53.039160 containerd[1881]: time="2025-07-15T04:39:53.039138831Z" level=info msg="StartContainer for \"6c725b7ef5814c58687449fa27d76bf6f5fd87217d580cacf96c8f6872a608aa\"" Jul 15 04:39:53.040507 containerd[1881]: time="2025-07-15T04:39:53.040485945Z" level=info msg="connecting to shim 6c725b7ef5814c58687449fa27d76bf6f5fd87217d580cacf96c8f6872a608aa" address="unix:///run/containerd/s/6fba7304522754af512ca14a3dac1756c7d586cd8af38eac587c53d76106f43e" protocol=ttrpc version=3 Jul 15 04:39:53.061876 systemd[1]: Started cri-containerd-f277585484955517ea3eae0fef4322fed4ebdf86d7db1a563d4fd43ed5fa35ee.scope - libcontainer container f277585484955517ea3eae0fef4322fed4ebdf86d7db1a563d4fd43ed5fa35ee. 
Jul 15 04:39:53.064375 containerd[1881]: time="2025-07-15T04:39:53.064317366Z" level=info msg="StartContainer for \"3344560f3f2a4b4d346fc682d382842e56f0117e7d95fa2c297534d8df7a9af4\" returns successfully" Jul 15 04:39:53.070793 systemd[1]: Started cri-containerd-6c725b7ef5814c58687449fa27d76bf6f5fd87217d580cacf96c8f6872a608aa.scope - libcontainer container 6c725b7ef5814c58687449fa27d76bf6f5fd87217d580cacf96c8f6872a608aa. Jul 15 04:39:53.128161 containerd[1881]: time="2025-07-15T04:39:53.128099690Z" level=info msg="StartContainer for \"6c725b7ef5814c58687449fa27d76bf6f5fd87217d580cacf96c8f6872a608aa\" returns successfully" Jul 15 04:39:53.128357 containerd[1881]: time="2025-07-15T04:39:53.128146396Z" level=info msg="StartContainer for \"f277585484955517ea3eae0fef4322fed4ebdf86d7db1a563d4fd43ed5fa35ee\" returns successfully" Jul 15 04:39:53.458688 kubelet[2925]: E0715 04:39:53.458566 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:53.466282 kubelet[2925]: E0715 04:39:53.466258 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:53.471870 kubelet[2925]: E0715 04:39:53.471814 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:54.470335 kubelet[2925]: E0715 04:39:54.470188 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:54.471916 kubelet[2925]: E0715 04:39:54.471812 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:54.561064 kubelet[2925]: E0715 04:39:54.561030 2925 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:54.603419 kubelet[2925]: I0715 04:39:54.603382 2925 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:54.615505 kubelet[2925]: I0715 04:39:54.615484 2925 kubelet_node_status.go:78] "Successfully registered node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:54.615505 kubelet[2925]: E0715 04:39:54.615534 2925 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4396.0.0-n-16ec4aa50e\": node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:54.623745 kubelet[2925]: E0715 04:39:54.623703 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:54.724671 kubelet[2925]: E0715 04:39:54.724551 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:54.825180 kubelet[2925]: E0715 04:39:54.825138 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:54.925926 kubelet[2925]: E0715 04:39:54.925879 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:55.026669 kubelet[2925]: E0715 04:39:55.026544 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:55.127079 kubelet[2925]: E0715 04:39:55.127037 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:55.227873 
kubelet[2925]: E0715 04:39:55.227825 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:55.328333 kubelet[2925]: E0715 04:39:55.328292 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:55.429239 kubelet[2925]: E0715 04:39:55.429184 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:55.472682 kubelet[2925]: E0715 04:39:55.471832 2925 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:55.529420 kubelet[2925]: E0715 04:39:55.529369 2925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-16ec4aa50e\" not found" Jul 15 04:39:55.616556 kubelet[2925]: I0715 04:39:55.616446 2925 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:55.625874 kubelet[2925]: I0715 04:39:55.625847 2925 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 04:39:55.625997 kubelet[2925]: I0715 04:39:55.625953 2925 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:55.632497 kubelet[2925]: I0715 04:39:55.632273 2925 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 04:39:55.632497 kubelet[2925]: I0715 04:39:55.632355 2925 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 
15 04:39:55.638352 kubelet[2925]: I0715 04:39:55.638336 2925 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 04:39:56.407486 kubelet[2925]: I0715 04:39:56.407247 2925 apiserver.go:52] "Watching apiserver" Jul 15 04:39:56.414381 systemd[1]: Reload requested from client PID 3204 ('systemctl') (unit session-9.scope)... Jul 15 04:39:56.414394 systemd[1]: Reloading... Jul 15 04:39:56.415611 kubelet[2925]: I0715 04:39:56.415587 2925 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:39:56.485922 zram_generator::config[3253]: No configuration found. Jul 15 04:39:56.549015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:39:56.639704 systemd[1]: Reloading finished in 224 ms. Jul 15 04:39:56.669872 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:56.687547 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:39:56.687801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:56.687852 systemd[1]: kubelet.service: Consumed 773ms CPU time, 126.4M memory peak. Jul 15 04:39:56.689272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:56.790742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:56.798208 (kubelet)[3314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:39:56.828784 kubelet[3314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:39:56.828784 kubelet[3314]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 04:39:56.828784 kubelet[3314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:39:56.829042 kubelet[3314]: I0715 04:39:56.828836 3314 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:39:56.832772 kubelet[3314]: I0715 04:39:56.832740 3314 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 04:39:56.832772 kubelet[3314]: I0715 04:39:56.832765 3314 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:39:56.832977 kubelet[3314]: I0715 04:39:56.832947 3314 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 04:39:56.833864 kubelet[3314]: I0715 04:39:56.833847 3314 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 04:39:56.835365 kubelet[3314]: I0715 04:39:56.835337 3314 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:39:56.840445 kubelet[3314]: I0715 04:39:56.840419 3314 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:39:56.842889 kubelet[3314]: I0715 04:39:56.842843 3314 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:39:56.843043 kubelet[3314]: I0715 04:39:56.843017 3314 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:39:56.843148 kubelet[3314]: I0715 04:39:56.843042 3314 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-16ec4aa50e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:39:56.843213 kubelet[3314]: I0715 04:39:56.843153 3314 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 
04:39:56.843213 kubelet[3314]: I0715 04:39:56.843161 3314 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 04:39:56.843213 kubelet[3314]: I0715 04:39:56.843192 3314 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:39:56.843751 kubelet[3314]: I0715 04:39:56.843294 3314 kubelet.go:480] "Attempting to sync node with API server" Jul 15 04:39:56.843751 kubelet[3314]: I0715 04:39:56.843305 3314 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:39:56.843751 kubelet[3314]: I0715 04:39:56.843325 3314 kubelet.go:386] "Adding apiserver pod source" Jul 15 04:39:56.843751 kubelet[3314]: I0715 04:39:56.843334 3314 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:39:56.844294 kubelet[3314]: I0715 04:39:56.844199 3314 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:39:56.844741 kubelet[3314]: I0715 04:39:56.844701 3314 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 04:39:56.846337 kubelet[3314]: I0715 04:39:56.846319 3314 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:39:56.846474 kubelet[3314]: I0715 04:39:56.846464 3314 server.go:1289] "Started kubelet" Jul 15 04:39:56.848059 kubelet[3314]: I0715 04:39:56.848035 3314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:39:56.859574 kubelet[3314]: I0715 04:39:56.859403 3314 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:39:56.860502 kubelet[3314]: I0715 04:39:56.860430 3314 server.go:317] "Adding debug handlers to kubelet server" Jul 15 04:39:56.866871 kubelet[3314]: I0715 04:39:56.866124 3314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:39:56.867468 kubelet[3314]: I0715 04:39:56.867412 3314 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:39:56.867785 kubelet[3314]: I0715 04:39:56.867763 3314 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:39:56.868171 kubelet[3314]: I0715 04:39:56.868123 3314 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 04:39:56.868872 kubelet[3314]: I0715 04:39:56.868859 3314 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:39:56.869588 kubelet[3314]: I0715 04:39:56.869545 3314 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 04:39:56.869588 kubelet[3314]: I0715 04:39:56.869576 3314 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 04:39:56.869588 kubelet[3314]: I0715 04:39:56.869596 3314 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 04:39:56.869690 kubelet[3314]: I0715 04:39:56.869606 3314 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 04:39:56.869690 kubelet[3314]: E0715 04:39:56.869654 3314 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:39:56.870420 kubelet[3314]: I0715 04:39:56.870307 3314 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:39:56.870921 kubelet[3314]: I0715 04:39:56.870905 3314 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:39:56.872764 kubelet[3314]: E0715 04:39:56.872143 3314 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:39:56.877167 kubelet[3314]: I0715 04:39:56.877147 3314 factory.go:223] Registration of the systemd container factory successfully Jul 15 04:39:56.879695 kubelet[3314]: I0715 04:39:56.879670 3314 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:39:56.881479 kubelet[3314]: I0715 04:39:56.881443 3314 factory.go:223] Registration of the containerd container factory successfully Jul 15 04:39:56.937334 kubelet[3314]: I0715 04:39:56.937244 3314 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:39:56.937334 kubelet[3314]: I0715 04:39:56.937262 3314 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:39:56.937334 kubelet[3314]: I0715 04:39:56.937286 3314 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:39:56.937478 kubelet[3314]: I0715 04:39:56.937409 3314 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 04:39:56.937478 kubelet[3314]: I0715 04:39:56.937418 3314 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:39:56.937478 kubelet[3314]: I0715 04:39:56.937432 3314 policy_none.go:49] "None policy: Start" Jul 15 04:39:56.937478 kubelet[3314]: I0715 04:39:56.937443 3314 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:39:56.937478 kubelet[3314]: I0715 04:39:56.937450 3314 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:39:56.937549 kubelet[3314]: I0715 04:39:56.937510 3314 state_mem.go:75] "Updated machine memory state" Jul 15 04:39:56.941409 kubelet[3314]: E0715 04:39:56.941386 3314 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 04:39:56.941558 kubelet[3314]: I0715 04:39:56.941541 3314 eviction_manager.go:189] "Eviction 
manager: starting control loop" Jul 15 04:39:56.941581 kubelet[3314]: I0715 04:39:56.941556 3314 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:39:56.942979 kubelet[3314]: I0715 04:39:56.942959 3314 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:39:56.943997 kubelet[3314]: E0715 04:39:56.943907 3314 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 04:39:56.970449 kubelet[3314]: I0715 04:39:56.970409 3314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.971632 kubelet[3314]: I0715 04:39:56.971616 3314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.972616 kubelet[3314]: I0715 04:39:56.972599 3314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974660 kubelet[3314]: I0715 04:39:56.971136 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974660 kubelet[3314]: I0715 04:39:56.974646 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d35f7e6674cfc69a5b5e2078ae4fc120-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-16ec4aa50e\" (UID: \"d35f7e6674cfc69a5b5e2078ae4fc120\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974660 
kubelet[3314]: I0715 04:39:56.974661 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40adb79a314dcc91bb63c2601d6daaa5-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-16ec4aa50e\" (UID: \"40adb79a314dcc91bb63c2601d6daaa5\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974778 kubelet[3314]: I0715 04:39:56.974673 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d35f7e6674cfc69a5b5e2078ae4fc120-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-16ec4aa50e\" (UID: \"d35f7e6674cfc69a5b5e2078ae4fc120\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974778 kubelet[3314]: I0715 04:39:56.974685 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d35f7e6674cfc69a5b5e2078ae4fc120-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4396.0.0-n-16ec4aa50e\" (UID: \"d35f7e6674cfc69a5b5e2078ae4fc120\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974778 kubelet[3314]: I0715 04:39:56.974696 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974778 kubelet[3314]: I0715 04:39:56.974705 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" 
(UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.974778 kubelet[3314]: I0715 04:39:56.974741 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.975433 kubelet[3314]: I0715 04:39:56.974750 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45cc6c520dd75e7ef18337535c1dd4f6-kubeconfig\") pod \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" (UID: \"45cc6c520dd75e7ef18337535c1dd4f6\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.983150 kubelet[3314]: I0715 04:39:56.983118 3314 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 04:39:56.983289 kubelet[3314]: I0715 04:39:56.983270 3314 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 04:39:56.983375 kubelet[3314]: E0715 04:39:56.983305 3314 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4396.0.0-n-16ec4aa50e\" already exists" pod="kube-system/kube-scheduler-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.983375 kubelet[3314]: E0715 04:39:56.983267 3314 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4396.0.0-n-16ec4aa50e\" already exists" pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:56.983486 kubelet[3314]: I0715 04:39:56.983468 3314 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 04:39:56.983516 kubelet[3314]: E0715 04:39:56.983493 3314 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" already exists" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:57.057336 kubelet[3314]: I0715 04:39:57.057306 3314 kubelet_node_status.go:75] "Attempting to register node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:57.076049 kubelet[3314]: I0715 04:39:57.075957 3314 kubelet_node_status.go:124] "Node was previously registered" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:57.076049 kubelet[3314]: I0715 04:39:57.076028 3314 kubelet_node_status.go:78] "Successfully registered node" node="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:57.845228 kubelet[3314]: I0715 04:39:57.844273 3314 apiserver.go:52] "Watching apiserver" Jul 15 04:39:57.870654 kubelet[3314]: I0715 04:39:57.870625 3314 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:39:57.906638 kubelet[3314]: I0715 04:39:57.906487 3314 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:57.917818 kubelet[3314]: I0715 04:39:57.917620 3314 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 15 04:39:57.917979 kubelet[3314]: E0715 04:39:57.917935 3314 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4396.0.0-n-16ec4aa50e\" already exists" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" Jul 15 04:39:57.950467 kubelet[3314]: I0715 04:39:57.949783 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4396.0.0-n-16ec4aa50e" podStartSLOduration=2.949768556 podStartE2EDuration="2.949768556s" podCreationTimestamp="2025-07-15 04:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:39:57.937877889 +0000 UTC m=+1.135608619" watchObservedRunningTime="2025-07-15 04:39:57.949768556 +0000 UTC m=+1.147499286" Jul 15 04:39:57.959737 kubelet[3314]: I0715 04:39:57.959691 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4396.0.0-n-16ec4aa50e" podStartSLOduration=2.959680404 podStartE2EDuration="2.959680404s" podCreationTimestamp="2025-07-15 04:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:39:57.950270594 +0000 UTC m=+1.148001340" watchObservedRunningTime="2025-07-15 04:39:57.959680404 +0000 UTC m=+1.157411134" Jul 15 04:39:57.959831 kubelet[3314]: I0715 04:39:57.959755 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-16ec4aa50e" podStartSLOduration=2.959751367 podStartE2EDuration="2.959751367s" podCreationTimestamp="2025-07-15 04:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:39:57.959604978 +0000 UTC m=+1.157335708" watchObservedRunningTime="2025-07-15 04:39:57.959751367 +0000 UTC m=+1.157482097" Jul 15 04:39:58.552379 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 15 04:39:59.328198 update_engine[1863]: I20250715 04:39:59.328126 1863 update_attempter.cc:509] Updating boot flags... 
Jul 15 04:40:01.700547 kubelet[3314]: I0715 04:40:01.700514 3314 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:40:01.700969 containerd[1881]: time="2025-07-15T04:40:01.700827212Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 04:40:01.701422 kubelet[3314]: I0715 04:40:01.701217 3314 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:40:02.810570 systemd[1]: Created slice kubepods-besteffort-pod871aa836_14d4_47b4_b5cb_fed2c105fb84.slice - libcontainer container kubepods-besteffort-pod871aa836_14d4_47b4_b5cb_fed2c105fb84.slice. Jul 15 04:40:02.907160 kubelet[3314]: I0715 04:40:02.907123 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hbhg\" (UniqueName: \"kubernetes.io/projected/871aa836-14d4-47b4-b5cb-fed2c105fb84-kube-api-access-4hbhg\") pod \"kube-proxy-xws8j\" (UID: \"871aa836-14d4-47b4-b5cb-fed2c105fb84\") " pod="kube-system/kube-proxy-xws8j" Jul 15 04:40:02.907160 kubelet[3314]: I0715 04:40:02.907159 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/871aa836-14d4-47b4-b5cb-fed2c105fb84-kube-proxy\") pod \"kube-proxy-xws8j\" (UID: \"871aa836-14d4-47b4-b5cb-fed2c105fb84\") " pod="kube-system/kube-proxy-xws8j" Jul 15 04:40:02.907160 kubelet[3314]: I0715 04:40:02.907172 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/871aa836-14d4-47b4-b5cb-fed2c105fb84-xtables-lock\") pod \"kube-proxy-xws8j\" (UID: \"871aa836-14d4-47b4-b5cb-fed2c105fb84\") " pod="kube-system/kube-proxy-xws8j" Jul 15 04:40:02.907577 kubelet[3314]: I0715 04:40:02.907183 3314 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/871aa836-14d4-47b4-b5cb-fed2c105fb84-lib-modules\") pod \"kube-proxy-xws8j\" (UID: \"871aa836-14d4-47b4-b5cb-fed2c105fb84\") " pod="kube-system/kube-proxy-xws8j" Jul 15 04:40:02.928064 systemd[1]: Created slice kubepods-besteffort-pod75e2ad85_4e94_4ab5_9f49_b09b8bb8b14e.slice - libcontainer container kubepods-besteffort-pod75e2ad85_4e94_4ab5_9f49_b09b8bb8b14e.slice. Jul 15 04:40:03.007621 kubelet[3314]: I0715 04:40:03.007531 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75e2ad85-4e94-4ab5-9f49-b09b8bb8b14e-var-lib-calico\") pod \"tigera-operator-747864d56d-bprgm\" (UID: \"75e2ad85-4e94-4ab5-9f49-b09b8bb8b14e\") " pod="tigera-operator/tigera-operator-747864d56d-bprgm" Jul 15 04:40:03.007771 kubelet[3314]: I0715 04:40:03.007624 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4ltr\" (UniqueName: \"kubernetes.io/projected/75e2ad85-4e94-4ab5-9f49-b09b8bb8b14e-kube-api-access-x4ltr\") pod \"tigera-operator-747864d56d-bprgm\" (UID: \"75e2ad85-4e94-4ab5-9f49-b09b8bb8b14e\") " pod="tigera-operator/tigera-operator-747864d56d-bprgm" Jul 15 04:40:03.118548 containerd[1881]: time="2025-07-15T04:40:03.118438577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xws8j,Uid:871aa836-14d4-47b4-b5cb-fed2c105fb84,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:03.178294 containerd[1881]: time="2025-07-15T04:40:03.178032061Z" level=info msg="connecting to shim 38c226a1c6e67760bedcaca2b97c861a3c19ce32e142a556bc824662defa6f94" address="unix:///run/containerd/s/52a1f9527de65b8c33738b04404f2bdec8881cb5657d59666694133bb91701a2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:03.201831 systemd[1]: Started 
cri-containerd-38c226a1c6e67760bedcaca2b97c861a3c19ce32e142a556bc824662defa6f94.scope - libcontainer container 38c226a1c6e67760bedcaca2b97c861a3c19ce32e142a556bc824662defa6f94. Jul 15 04:40:03.222033 containerd[1881]: time="2025-07-15T04:40:03.221999288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xws8j,Uid:871aa836-14d4-47b4-b5cb-fed2c105fb84,Namespace:kube-system,Attempt:0,} returns sandbox id \"38c226a1c6e67760bedcaca2b97c861a3c19ce32e142a556bc824662defa6f94\"" Jul 15 04:40:03.230949 containerd[1881]: time="2025-07-15T04:40:03.230901527Z" level=info msg="CreateContainer within sandbox \"38c226a1c6e67760bedcaca2b97c861a3c19ce32e142a556bc824662defa6f94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:40:03.231122 containerd[1881]: time="2025-07-15T04:40:03.230939409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-bprgm,Uid:75e2ad85-4e94-4ab5-9f49-b09b8bb8b14e,Namespace:tigera-operator,Attempt:0,}" Jul 15 04:40:03.286824 containerd[1881]: time="2025-07-15T04:40:03.286749466Z" level=info msg="Container e3d854ad2d6d9c8b43f76732ac667af03f0a01c803d035d0217a32ec375ab104: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:03.330877 containerd[1881]: time="2025-07-15T04:40:03.330825521Z" level=info msg="CreateContainer within sandbox \"38c226a1c6e67760bedcaca2b97c861a3c19ce32e142a556bc824662defa6f94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3d854ad2d6d9c8b43f76732ac667af03f0a01c803d035d0217a32ec375ab104\"" Jul 15 04:40:03.331840 containerd[1881]: time="2025-07-15T04:40:03.331688405Z" level=info msg="StartContainer for \"e3d854ad2d6d9c8b43f76732ac667af03f0a01c803d035d0217a32ec375ab104\"" Jul 15 04:40:03.333247 containerd[1881]: time="2025-07-15T04:40:03.332769888Z" level=info msg="connecting to shim e3d854ad2d6d9c8b43f76732ac667af03f0a01c803d035d0217a32ec375ab104" 
address="unix:///run/containerd/s/52a1f9527de65b8c33738b04404f2bdec8881cb5657d59666694133bb91701a2" protocol=ttrpc version=3 Jul 15 04:40:03.337427 containerd[1881]: time="2025-07-15T04:40:03.337393277Z" level=info msg="connecting to shim 120b2cf53ee5ce45624d343061d0b779d9bce5f7d46ee18cc88b92aa9313e189" address="unix:///run/containerd/s/00f17479c45cad7cee620d7879e9588045b1797e8dd0ae3fc4771c2c43713b1c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:03.351852 systemd[1]: Started cri-containerd-e3d854ad2d6d9c8b43f76732ac667af03f0a01c803d035d0217a32ec375ab104.scope - libcontainer container e3d854ad2d6d9c8b43f76732ac667af03f0a01c803d035d0217a32ec375ab104. Jul 15 04:40:03.360939 systemd[1]: Started cri-containerd-120b2cf53ee5ce45624d343061d0b779d9bce5f7d46ee18cc88b92aa9313e189.scope - libcontainer container 120b2cf53ee5ce45624d343061d0b779d9bce5f7d46ee18cc88b92aa9313e189. Jul 15 04:40:03.393642 containerd[1881]: time="2025-07-15T04:40:03.393299066Z" level=info msg="StartContainer for \"e3d854ad2d6d9c8b43f76732ac667af03f0a01c803d035d0217a32ec375ab104\" returns successfully" Jul 15 04:40:03.407085 containerd[1881]: time="2025-07-15T04:40:03.407038422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-bprgm,Uid:75e2ad85-4e94-4ab5-9f49-b09b8bb8b14e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"120b2cf53ee5ce45624d343061d0b779d9bce5f7d46ee18cc88b92aa9313e189\"" Jul 15 04:40:03.409493 containerd[1881]: time="2025-07-15T04:40:03.409245373Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 04:40:04.017006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200482774.mount: Deactivated successfully. 
Jul 15 04:40:04.113902 kubelet[3314]: I0715 04:40:04.113850 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xws8j" podStartSLOduration=2.113839711 podStartE2EDuration="2.113839711s" podCreationTimestamp="2025-07-15 04:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:03.927480831 +0000 UTC m=+7.125211561" watchObservedRunningTime="2025-07-15 04:40:04.113839711 +0000 UTC m=+7.311570441" Jul 15 04:40:04.801355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636031563.mount: Deactivated successfully. Jul 15 04:40:07.270377 containerd[1881]: time="2025-07-15T04:40:07.270326627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:07.274913 containerd[1881]: time="2025-07-15T04:40:07.274871141Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22151945" Jul 15 04:40:07.279380 containerd[1881]: time="2025-07-15T04:40:07.279337141Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:07.289061 containerd[1881]: time="2025-07-15T04:40:07.289012350Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:07.289458 containerd[1881]: time="2025-07-15T04:40:07.289318352Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest 
\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 3.880047138s" Jul 15 04:40:07.289458 containerd[1881]: time="2025-07-15T04:40:07.289346809Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 04:40:07.300520 containerd[1881]: time="2025-07-15T04:40:07.300492416Z" level=info msg="CreateContainer within sandbox \"120b2cf53ee5ce45624d343061d0b779d9bce5f7d46ee18cc88b92aa9313e189\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 04:40:07.333914 containerd[1881]: time="2025-07-15T04:40:07.333866758Z" level=info msg="Container 25a8f0afafa8c15be3edac445d72f3c299a9365176a707177bb0a3bf816c93bb: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:07.352532 containerd[1881]: time="2025-07-15T04:40:07.352490127Z" level=info msg="CreateContainer within sandbox \"120b2cf53ee5ce45624d343061d0b779d9bce5f7d46ee18cc88b92aa9313e189\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"25a8f0afafa8c15be3edac445d72f3c299a9365176a707177bb0a3bf816c93bb\"" Jul 15 04:40:07.352908 containerd[1881]: time="2025-07-15T04:40:07.352881940Z" level=info msg="StartContainer for \"25a8f0afafa8c15be3edac445d72f3c299a9365176a707177bb0a3bf816c93bb\"" Jul 15 04:40:07.353475 containerd[1881]: time="2025-07-15T04:40:07.353446998Z" level=info msg="connecting to shim 25a8f0afafa8c15be3edac445d72f3c299a9365176a707177bb0a3bf816c93bb" address="unix:///run/containerd/s/00f17479c45cad7cee620d7879e9588045b1797e8dd0ae3fc4771c2c43713b1c" protocol=ttrpc version=3 Jul 15 04:40:07.370835 systemd[1]: Started cri-containerd-25a8f0afafa8c15be3edac445d72f3c299a9365176a707177bb0a3bf816c93bb.scope - libcontainer container 25a8f0afafa8c15be3edac445d72f3c299a9365176a707177bb0a3bf816c93bb. 
Jul 15 04:40:07.397685 containerd[1881]: time="2025-07-15T04:40:07.397640465Z" level=info msg="StartContainer for \"25a8f0afafa8c15be3edac445d72f3c299a9365176a707177bb0a3bf816c93bb\" returns successfully" Jul 15 04:40:11.593026 kubelet[3314]: I0715 04:40:11.592911 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-bprgm" podStartSLOduration=5.710902562 podStartE2EDuration="9.592896995s" podCreationTimestamp="2025-07-15 04:40:02 +0000 UTC" firstStartedPulling="2025-07-15 04:40:03.40816589 +0000 UTC m=+6.605896620" lastFinishedPulling="2025-07-15 04:40:07.290160275 +0000 UTC m=+10.487891053" observedRunningTime="2025-07-15 04:40:07.935201651 +0000 UTC m=+11.132932389" watchObservedRunningTime="2025-07-15 04:40:11.592896995 +0000 UTC m=+14.790627725" Jul 15 04:40:12.472398 sudo[2349]: pam_unix(sudo:session): session closed for user root Jul 15 04:40:12.558891 sshd[2348]: Connection closed by 10.200.16.10 port 51758 Jul 15 04:40:12.560907 sshd-session[2345]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:12.565229 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:51758.service: Deactivated successfully. Jul 15 04:40:12.568375 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 04:40:12.569260 systemd[1]: session-9.scope: Consumed 4.312s CPU time, 222.5M memory peak. Jul 15 04:40:12.572111 systemd-logind[1861]: Session 9 logged out. Waiting for processes to exit. Jul 15 04:40:12.573820 systemd-logind[1861]: Removed session 9. Jul 15 04:40:17.283397 systemd[1]: Created slice kubepods-besteffort-podafe386d7_77c3_40fe_a66b_08d25b88706b.slice - libcontainer container kubepods-besteffort-podafe386d7_77c3_40fe_a66b_08d25b88706b.slice. 
Jul 15 04:40:17.380358 kubelet[3314]: I0715 04:40:17.380262 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe386d7-77c3-40fe-a66b-08d25b88706b-tigera-ca-bundle\") pod \"calico-typha-545468d9cf-52vw7\" (UID: \"afe386d7-77c3-40fe-a66b-08d25b88706b\") " pod="calico-system/calico-typha-545468d9cf-52vw7" Jul 15 04:40:17.381123 kubelet[3314]: I0715 04:40:17.380951 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afe386d7-77c3-40fe-a66b-08d25b88706b-typha-certs\") pod \"calico-typha-545468d9cf-52vw7\" (UID: \"afe386d7-77c3-40fe-a66b-08d25b88706b\") " pod="calico-system/calico-typha-545468d9cf-52vw7" Jul 15 04:40:17.381123 kubelet[3314]: I0715 04:40:17.380980 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27tt\" (UniqueName: \"kubernetes.io/projected/afe386d7-77c3-40fe-a66b-08d25b88706b-kube-api-access-l27tt\") pod \"calico-typha-545468d9cf-52vw7\" (UID: \"afe386d7-77c3-40fe-a66b-08d25b88706b\") " pod="calico-system/calico-typha-545468d9cf-52vw7" Jul 15 04:40:17.417013 systemd[1]: Created slice kubepods-besteffort-pod19239508_4175_4af8_977a_71c40403cd07.slice - libcontainer container kubepods-besteffort-pod19239508_4175_4af8_977a_71c40403cd07.slice. 
Jul 15 04:40:17.481874 kubelet[3314]: I0715 04:40:17.481836 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-cni-net-dir\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.482195 kubelet[3314]: I0715 04:40:17.482145 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-policysync\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.482881 kubelet[3314]: I0715 04:40:17.482175 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-cni-log-dir\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.482881 kubelet[3314]: I0715 04:40:17.482809 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-var-lib-calico\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484819 kubelet[3314]: I0715 04:40:17.482823 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqhj\" (UniqueName: \"kubernetes.io/projected/19239508-4175-4af8-977a-71c40403cd07-kube-api-access-6rqhj\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484819 kubelet[3314]: I0715 04:40:17.483019 3314 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-flexvol-driver-host\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484819 kubelet[3314]: I0715 04:40:17.483044 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19239508-4175-4af8-977a-71c40403cd07-node-certs\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484819 kubelet[3314]: I0715 04:40:17.483055 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19239508-4175-4af8-977a-71c40403cd07-tigera-ca-bundle\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484819 kubelet[3314]: I0715 04:40:17.483079 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-cni-bin-dir\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484911 kubelet[3314]: I0715 04:40:17.483088 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-lib-modules\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484911 kubelet[3314]: I0715 04:40:17.483097 3314 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-var-run-calico\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.484911 kubelet[3314]: I0715 04:40:17.483107 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19239508-4175-4af8-977a-71c40403cd07-xtables-lock\") pod \"calico-node-6278d\" (UID: \"19239508-4175-4af8-977a-71c40403cd07\") " pod="calico-system/calico-node-6278d" Jul 15 04:40:17.565080 kubelet[3314]: E0715 04:40:17.564375 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm58g" podUID="e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27" Jul 15 04:40:17.583753 kubelet[3314]: I0715 04:40:17.583262 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27-registration-dir\") pod \"csi-node-driver-qm58g\" (UID: \"e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27\") " pod="calico-system/csi-node-driver-qm58g" Jul 15 04:40:17.584028 kubelet[3314]: I0715 04:40:17.584010 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkwc\" (UniqueName: \"kubernetes.io/projected/e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27-kube-api-access-czkwc\") pod \"csi-node-driver-qm58g\" (UID: \"e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27\") " pod="calico-system/csi-node-driver-qm58g" Jul 15 04:40:17.584401 kubelet[3314]: I0715 04:40:17.584382 3314 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27-socket-dir\") pod \"csi-node-driver-qm58g\" (UID: \"e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27\") " pod="calico-system/csi-node-driver-qm58g" Jul 15 04:40:17.584498 kubelet[3314]: I0715 04:40:17.584485 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27-varrun\") pod \"csi-node-driver-qm58g\" (UID: \"e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27\") " pod="calico-system/csi-node-driver-qm58g" Jul 15 04:40:17.585193 kubelet[3314]: I0715 04:40:17.584838 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27-kubelet-dir\") pod \"csi-node-driver-qm58g\" (UID: \"e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27\") " pod="calico-system/csi-node-driver-qm58g" Jul 15 04:40:17.585699 kubelet[3314]: E0715 04:40:17.585683 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.585859 kubelet[3314]: W0715 04:40:17.585788 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.585859 kubelet[3314]: E0715 04:40:17.585810 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.586149 kubelet[3314]: E0715 04:40:17.586136 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.587386 kubelet[3314]: W0715 04:40:17.587373 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.587540 kubelet[3314]: E0715 04:40:17.587457 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.588099 kubelet[3314]: E0715 04:40:17.587919 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.588099 kubelet[3314]: W0715 04:40:17.588021 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.588099 kubelet[3314]: E0715 04:40:17.588036 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.588456 containerd[1881]: time="2025-07-15T04:40:17.588417864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545468d9cf-52vw7,Uid:afe386d7-77c3-40fe-a66b-08d25b88706b,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:17.590363 kubelet[3314]: E0715 04:40:17.590012 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.590363 kubelet[3314]: W0715 04:40:17.590024 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.591810 kubelet[3314]: E0715 04:40:17.590036 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.591810 kubelet[3314]: E0715 04:40:17.591640 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.591810 kubelet[3314]: W0715 04:40:17.591651 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.591810 kubelet[3314]: E0715 04:40:17.591662 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.591810 kubelet[3314]: E0715 04:40:17.591786 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.591810 kubelet[3314]: W0715 04:40:17.591792 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.591810 kubelet[3314]: E0715 04:40:17.591800 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.592178 kubelet[3314]: E0715 04:40:17.592164 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.592291 kubelet[3314]: W0715 04:40:17.592257 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.592291 kubelet[3314]: E0715 04:40:17.592274 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.593849 kubelet[3314]: E0715 04:40:17.593802 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.593849 kubelet[3314]: W0715 04:40:17.593816 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.593849 kubelet[3314]: E0715 04:40:17.593828 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.601906 kubelet[3314]: E0715 04:40:17.601879 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.601906 kubelet[3314]: W0715 04:40:17.601892 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.602026 kubelet[3314]: E0715 04:40:17.601997 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.657641 containerd[1881]: time="2025-07-15T04:40:17.657521385Z" level=info msg="connecting to shim e135fd7bf619086195bce4608a077252d4f034854a7a8431ff01a721c64945cb" address="unix:///run/containerd/s/a26f30343a96bf8c8d494aba61f9c5c01654b93bd3bbc34a9b2b8b5f6b363e49" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:17.676866 systemd[1]: Started cri-containerd-e135fd7bf619086195bce4608a077252d4f034854a7a8431ff01a721c64945cb.scope - libcontainer container e135fd7bf619086195bce4608a077252d4f034854a7a8431ff01a721c64945cb. 
Jul 15 04:40:17.685609 kubelet[3314]: E0715 04:40:17.685514 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.685609 kubelet[3314]: W0715 04:40:17.685533 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.685609 kubelet[3314]: E0715 04:40:17.685567 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.686250 kubelet[3314]: E0715 04:40:17.685724 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686250 kubelet[3314]: W0715 04:40:17.685731 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686250 kubelet[3314]: E0715 04:40:17.685739 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.686250 kubelet[3314]: E0715 04:40:17.685873 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686250 kubelet[3314]: W0715 04:40:17.685882 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686250 kubelet[3314]: E0715 04:40:17.685891 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.686250 kubelet[3314]: E0715 04:40:17.686054 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686250 kubelet[3314]: W0715 04:40:17.686061 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686250 kubelet[3314]: E0715 04:40:17.686068 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.686250 kubelet[3314]: E0715 04:40:17.686190 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686858 kubelet[3314]: W0715 04:40:17.686196 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686858 kubelet[3314]: E0715 04:40:17.686203 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.686858 kubelet[3314]: E0715 04:40:17.686332 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686858 kubelet[3314]: W0715 04:40:17.686340 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686858 kubelet[3314]: E0715 04:40:17.686346 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.686858 kubelet[3314]: E0715 04:40:17.686447 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686858 kubelet[3314]: W0715 04:40:17.686452 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686858 kubelet[3314]: E0715 04:40:17.686462 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.686858 kubelet[3314]: E0715 04:40:17.686543 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686858 kubelet[3314]: W0715 04:40:17.686548 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686994 kubelet[3314]: E0715 04:40:17.686553 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.686994 kubelet[3314]: E0715 04:40:17.686698 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686994 kubelet[3314]: W0715 04:40:17.686704 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686994 kubelet[3314]: E0715 04:40:17.686722 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.686994 kubelet[3314]: E0715 04:40:17.686830 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686994 kubelet[3314]: W0715 04:40:17.686836 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686994 kubelet[3314]: E0715 04:40:17.686841 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.686994 kubelet[3314]: E0715 04:40:17.686929 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.686994 kubelet[3314]: W0715 04:40:17.686934 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.686994 kubelet[3314]: E0715 04:40:17.686946 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.687122 kubelet[3314]: E0715 04:40:17.687029 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.687122 kubelet[3314]: W0715 04:40:17.687034 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.687122 kubelet[3314]: E0715 04:40:17.687039 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.687165 kubelet[3314]: E0715 04:40:17.687138 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.687165 kubelet[3314]: W0715 04:40:17.687143 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.687165 kubelet[3314]: E0715 04:40:17.687148 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.687679 kubelet[3314]: E0715 04:40:17.687380 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.687679 kubelet[3314]: W0715 04:40:17.687392 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.687679 kubelet[3314]: E0715 04:40:17.687399 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.687679 kubelet[3314]: E0715 04:40:17.687629 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.687679 kubelet[3314]: W0715 04:40:17.687637 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.687679 kubelet[3314]: E0715 04:40:17.687645 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.688126 kubelet[3314]: E0715 04:40:17.688016 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.688126 kubelet[3314]: W0715 04:40:17.688031 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.688750 kubelet[3314]: E0715 04:40:17.688602 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.689235 kubelet[3314]: E0715 04:40:17.688804 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.689235 kubelet[3314]: W0715 04:40:17.688825 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.689235 kubelet[3314]: E0715 04:40:17.688834 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.689235 kubelet[3314]: E0715 04:40:17.688958 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.689235 kubelet[3314]: W0715 04:40:17.688964 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.689235 kubelet[3314]: E0715 04:40:17.688983 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.689235 kubelet[3314]: E0715 04:40:17.689080 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.689235 kubelet[3314]: W0715 04:40:17.689085 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.689235 kubelet[3314]: E0715 04:40:17.689091 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.689235 kubelet[3314]: E0715 04:40:17.689185 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.689629 kubelet[3314]: W0715 04:40:17.689190 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.689629 kubelet[3314]: E0715 04:40:17.689196 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.690110 kubelet[3314]: E0715 04:40:17.689744 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.690110 kubelet[3314]: W0715 04:40:17.689756 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.690110 kubelet[3314]: E0715 04:40:17.689766 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.690110 kubelet[3314]: E0715 04:40:17.689927 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.690110 kubelet[3314]: W0715 04:40:17.689934 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.690110 kubelet[3314]: E0715 04:40:17.689953 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.690698 kubelet[3314]: E0715 04:40:17.690307 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.690698 kubelet[3314]: W0715 04:40:17.690337 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.690698 kubelet[3314]: E0715 04:40:17.690349 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:17.691723 kubelet[3314]: E0715 04:40:17.690859 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.691723 kubelet[3314]: W0715 04:40:17.690872 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.691723 kubelet[3314]: E0715 04:40:17.690882 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:17.691723 kubelet[3314]: E0715 04:40:17.691025 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:17.691723 kubelet[3314]: W0715 04:40:17.691032 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:17.691723 kubelet[3314]: E0715 04:40:17.691039 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 15 04:40:17.697012 kubelet[3314]: E0715 04:40:17.696993 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:17.697012 kubelet[3314]: W0715 04:40:17.697007 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:17.697107 kubelet[3314]: E0715 04:40:17.697018 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:17.723721 containerd[1881]: time="2025-07-15T04:40:17.722992444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6278d,Uid:19239508-4175-4af8-977a-71c40403cd07,Namespace:calico-system,Attempt:0,}"
Jul 15 04:40:17.750740 containerd[1881]: time="2025-07-15T04:40:17.750493410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545468d9cf-52vw7,Uid:afe386d7-77c3-40fe-a66b-08d25b88706b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e135fd7bf619086195bce4608a077252d4f034854a7a8431ff01a721c64945cb\""
Jul 15 04:40:17.753435 containerd[1881]: time="2025-07-15T04:40:17.753400005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 04:40:17.821487 containerd[1881]: time="2025-07-15T04:40:17.821148039Z" level=info msg="connecting to shim bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3" address="unix:///run/containerd/s/d7380c0097708396908ee1a93b3adc41308a7adbf316a706fc7cafdf997bb9da" namespace=k8s.io protocol=ttrpc version=3
Jul 15 04:40:17.848880 systemd[1]: Started cri-containerd-bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3.scope - libcontainer container bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3.
Jul 15 04:40:17.883062 containerd[1881]: time="2025-07-15T04:40:17.883022018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6278d,Uid:19239508-4175-4af8-977a-71c40403cd07,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3\""
Jul 15 04:40:18.870971 kubelet[3314]: E0715 04:40:18.870864 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm58g" podUID="e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27"
Jul 15 04:40:19.112931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573553943.mount: Deactivated successfully.
Jul 15 04:40:20.268678 containerd[1881]: time="2025-07-15T04:40:20.268215967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:20.280128 containerd[1881]: time="2025-07-15T04:40:20.280096227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 15 04:40:20.284816 containerd[1881]: time="2025-07-15T04:40:20.284777550Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:20.291705 containerd[1881]: time="2025-07-15T04:40:20.291664709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:20.292336 containerd[1881]: time="2025-07-15T04:40:20.292217727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.538688679s"
Jul 15 04:40:20.292336 containerd[1881]: time="2025-07-15T04:40:20.292244776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 15 04:40:20.294457 containerd[1881]: time="2025-07-15T04:40:20.293971622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 15 04:40:20.309501 containerd[1881]: time="2025-07-15T04:40:20.309472275Z" level=info msg="CreateContainer within sandbox \"e135fd7bf619086195bce4608a077252d4f034854a7a8431ff01a721c64945cb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 15 04:40:20.341795 containerd[1881]: time="2025-07-15T04:40:20.341202662Z" level=info msg="Container eaac054a357a43fde73b60ca59c74f1c92fe46aa4b574fd28aaef60e5e464d3c: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:20.369455 containerd[1881]: time="2025-07-15T04:40:20.369400993Z" level=info msg="CreateContainer within sandbox \"e135fd7bf619086195bce4608a077252d4f034854a7a8431ff01a721c64945cb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eaac054a357a43fde73b60ca59c74f1c92fe46aa4b574fd28aaef60e5e464d3c\""
Jul 15 04:40:20.370451 containerd[1881]: time="2025-07-15T04:40:20.370217730Z" level=info msg="StartContainer for \"eaac054a357a43fde73b60ca59c74f1c92fe46aa4b574fd28aaef60e5e464d3c\""
Jul 15 04:40:20.371738 containerd[1881]: time="2025-07-15T04:40:20.371653135Z" level=info msg="connecting to shim eaac054a357a43fde73b60ca59c74f1c92fe46aa4b574fd28aaef60e5e464d3c" address="unix:///run/containerd/s/a26f30343a96bf8c8d494aba61f9c5c01654b93bd3bbc34a9b2b8b5f6b363e49" protocol=ttrpc version=3
Jul 15 04:40:20.392827 systemd[1]: Started cri-containerd-eaac054a357a43fde73b60ca59c74f1c92fe46aa4b574fd28aaef60e5e464d3c.scope - libcontainer container eaac054a357a43fde73b60ca59c74f1c92fe46aa4b574fd28aaef60e5e464d3c.
Jul 15 04:40:20.423885 containerd[1881]: time="2025-07-15T04:40:20.423830682Z" level=info msg="StartContainer for \"eaac054a357a43fde73b60ca59c74f1c92fe46aa4b574fd28aaef60e5e464d3c\" returns successfully"
Jul 15 04:40:20.871206 kubelet[3314]: E0715 04:40:20.870902 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm58g" podUID="e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27"
Jul 15 04:40:20.958983 kubelet[3314]: I0715 04:40:20.958689 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-545468d9cf-52vw7" podStartSLOduration=1.418391673 podStartE2EDuration="3.95820949s" podCreationTimestamp="2025-07-15 04:40:17 +0000 UTC" firstStartedPulling="2025-07-15 04:40:17.753125452 +0000 UTC m=+20.950856182" lastFinishedPulling="2025-07-15 04:40:20.292943269 +0000 UTC m=+23.490673999" observedRunningTime="2025-07-15 04:40:20.958080989 +0000 UTC m=+24.155811719" watchObservedRunningTime="2025-07-15 04:40:20.95820949 +0000 UTC m=+24.155940220"
Jul 15 04:40:20.996167 kubelet[3314]: E0715 04:40:20.996135 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.996427 kubelet[3314]: W0715 04:40:20.996314 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.996427 kubelet[3314]: E0715 04:40:20.996339 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.996693 kubelet[3314]: E0715 04:40:20.996622 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.996799 kubelet[3314]: W0715 04:40:20.996763 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.996945 kubelet[3314]: E0715 04:40:20.996847 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.997060 kubelet[3314]: E0715 04:40:20.997048 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.997112 kubelet[3314]: W0715 04:40:20.997103 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.997260 kubelet[3314]: E0715 04:40:20.997243 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.997617 kubelet[3314]: E0715 04:40:20.997516 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.997617 kubelet[3314]: W0715 04:40:20.997529 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.997617 kubelet[3314]: E0715 04:40:20.997539 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.997808 kubelet[3314]: E0715 04:40:20.997795 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.997857 kubelet[3314]: W0715 04:40:20.997847 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.997898 kubelet[3314]: E0715 04:40:20.997889 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.998156 kubelet[3314]: E0715 04:40:20.998073 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.998156 kubelet[3314]: W0715 04:40:20.998084 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.998156 kubelet[3314]: E0715 04:40:20.998094 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.998313 kubelet[3314]: E0715 04:40:20.998301 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.998357 kubelet[3314]: W0715 04:40:20.998348 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.998500 kubelet[3314]: E0715 04:40:20.998395 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.999032 kubelet[3314]: E0715 04:40:20.998909 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.999032 kubelet[3314]: W0715 04:40:20.998924 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.999032 kubelet[3314]: E0715 04:40:20.998935 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.999201 kubelet[3314]: E0715 04:40:20.999188 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.999249 kubelet[3314]: W0715 04:40:20.999239 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.999303 kubelet[3314]: E0715 04:40:20.999290 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.999555 kubelet[3314]: E0715 04:40:20.999474 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.999555 kubelet[3314]: W0715 04:40:20.999485 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.999555 kubelet[3314]: E0715 04:40:20.999495 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:20.999724 kubelet[3314]: E0715 04:40:20.999695 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:20.999776 kubelet[3314]: W0715 04:40:20.999766 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:20.999830 kubelet[3314]: E0715 04:40:20.999820 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.000048 kubelet[3314]: E0715 04:40:21.000015 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.000103 kubelet[3314]: W0715 04:40:21.000026 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.000160 kubelet[3314]: E0715 04:40:21.000150 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.000379 kubelet[3314]: E0715 04:40:21.000367 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.000459 kubelet[3314]: W0715 04:40:21.000426 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.000510 kubelet[3314]: E0715 04:40:21.000499 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.000739 kubelet[3314]: E0715 04:40:21.000727 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.000809 kubelet[3314]: W0715 04:40:21.000799 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.000944 kubelet[3314]: E0715 04:40:21.000870 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.001178 kubelet[3314]: E0715 04:40:21.001117 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.001178 kubelet[3314]: W0715 04:40:21.001128 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.001178 kubelet[3314]: E0715 04:40:21.001137 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.005313 kubelet[3314]: E0715 04:40:21.005295 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.005313 kubelet[3314]: W0715 04:40:21.005309 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.005415 kubelet[3314]: E0715 04:40:21.005319 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.005468 kubelet[3314]: E0715 04:40:21.005458 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.005468 kubelet[3314]: W0715 04:40:21.005464 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.005638 kubelet[3314]: E0715 04:40:21.005471 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.005751 kubelet[3314]: E0715 04:40:21.005737 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.005817 kubelet[3314]: W0715 04:40:21.005803 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.005866 kubelet[3314]: E0715 04:40:21.005856 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.006061 kubelet[3314]: E0715 04:40:21.006051 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.006143 kubelet[3314]: W0715 04:40:21.006119 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.006143 kubelet[3314]: E0715 04:40:21.006132 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.006432 kubelet[3314]: E0715 04:40:21.006355 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.006432 kubelet[3314]: W0715 04:40:21.006366 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.006432 kubelet[3314]: E0715 04:40:21.006374 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.006654 kubelet[3314]: E0715 04:40:21.006643 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.006858 kubelet[3314]: W0715 04:40:21.006725 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.006858 kubelet[3314]: E0715 04:40:21.006739 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.007206 kubelet[3314]: E0715 04:40:21.007161 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.007206 kubelet[3314]: W0715 04:40:21.007175 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.007206 kubelet[3314]: E0715 04:40:21.007191 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.007496 kubelet[3314]: E0715 04:40:21.007480 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.007496 kubelet[3314]: W0715 04:40:21.007492 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.007979 kubelet[3314]: E0715 04:40:21.007502 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.007979 kubelet[3314]: E0715 04:40:21.007680 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.007979 kubelet[3314]: W0715 04:40:21.007689 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.007979 kubelet[3314]: E0715 04:40:21.007701 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.007979 kubelet[3314]: E0715 04:40:21.007838 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.007979 kubelet[3314]: W0715 04:40:21.007845 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.007979 kubelet[3314]: E0715 04:40:21.007852 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.009166 kubelet[3314]: E0715 04:40:21.007988 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.009166 kubelet[3314]: W0715 04:40:21.007995 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.009166 kubelet[3314]: E0715 04:40:21.008002 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.009166 kubelet[3314]: E0715 04:40:21.008105 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.009166 kubelet[3314]: W0715 04:40:21.008111 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.009166 kubelet[3314]: E0715 04:40:21.008118 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.009166 kubelet[3314]: E0715 04:40:21.008246 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.009166 kubelet[3314]: W0715 04:40:21.008252 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.009166 kubelet[3314]: E0715 04:40:21.008259 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.009166 kubelet[3314]: E0715 04:40:21.008478 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.009307 kubelet[3314]: W0715 04:40:21.008488 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.009307 kubelet[3314]: E0715 04:40:21.008499 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.009307 kubelet[3314]: E0715 04:40:21.008654 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.009307 kubelet[3314]: W0715 04:40:21.008663 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.009307 kubelet[3314]: E0715 04:40:21.008671 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.010159 kubelet[3314]: E0715 04:40:21.009764 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.010159 kubelet[3314]: W0715 04:40:21.009780 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.010159 kubelet[3314]: E0715 04:40:21.009790 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.010490 kubelet[3314]: E0715 04:40:21.010378 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.010490 kubelet[3314]: W0715 04:40:21.010390 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.010490 kubelet[3314]: E0715 04:40:21.010400 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.010647 kubelet[3314]: E0715 04:40:21.010638 3314 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:40:21.010752 kubelet[3314]: W0715 04:40:21.010692 3314 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:40:21.010752 kubelet[3314]: E0715 04:40:21.010707 3314 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:40:21.810132 containerd[1881]: time="2025-07-15T04:40:21.810079908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:21.814224 containerd[1881]: time="2025-07-15T04:40:21.814085010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981"
Jul 15 04:40:21.817213 containerd[1881]: time="2025-07-15T04:40:21.817178787Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:21.823302 containerd[1881]: time="2025-07-15T04:40:21.823238513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:21.823813 containerd[1881]: time="2025-07-15T04:40:21.823526538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.529526075s"
Jul 15 04:40:21.823813 containerd[1881]: time="2025-07-15T04:40:21.823557907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Jul 15 04:40:21.830808 containerd[1881]: time="2025-07-15T04:40:21.830780197Z" level=info msg="CreateContainer within sandbox \"bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 15 04:40:21.862381 containerd[1881]: time="2025-07-15T04:40:21.861634820Z" level=info msg="Container f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:21.863428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056573796.mount: Deactivated successfully.
Jul 15 04:40:21.885750 containerd[1881]: time="2025-07-15T04:40:21.885691141Z" level=info msg="CreateContainer within sandbox \"bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c\""
Jul 15 04:40:21.886398 containerd[1881]: time="2025-07-15T04:40:21.886343698Z" level=info msg="StartContainer for \"f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c\""
Jul 15 04:40:21.887883 containerd[1881]: time="2025-07-15T04:40:21.887823264Z" level=info msg="connecting to shim f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c" address="unix:///run/containerd/s/d7380c0097708396908ee1a93b3adc41308a7adbf316a706fc7cafdf997bb9da" protocol=ttrpc version=3
Jul 15 04:40:21.906889 systemd[1]: Started cri-containerd-f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c.scope - libcontainer container f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c.
Jul 15 04:40:21.948697 systemd[1]: cri-containerd-f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c.scope: Deactivated successfully.
Jul 15 04:40:21.949806 containerd[1881]: time="2025-07-15T04:40:21.949767877Z" level=info msg="StartContainer for \"f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c\" returns successfully"
Jul 15 04:40:21.953062 containerd[1881]: time="2025-07-15T04:40:21.952938544Z" level=info msg="received exit event container_id:\"f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c\" id:\"f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c\" pid:4054 exited_at:{seconds:1752554421 nanos:952298772}"
Jul 15 04:40:21.953862 containerd[1881]: time="2025-07-15T04:40:21.953150423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c\" id:\"f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c\" pid:4054 exited_at:{seconds:1752554421 nanos:952298772}"
Jul 15 04:40:21.956838 kubelet[3314]: I0715 04:40:21.956814 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:40:21.978645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4460cab851e3ca649433ae90b9a0001232ac1b658e926fcd3967f2f4b04ef3c-rootfs.mount: Deactivated successfully.
Jul 15 04:40:22.871024 kubelet[3314]: E0715 04:40:22.870904 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm58g" podUID="e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27"
Jul 15 04:40:23.964214 containerd[1881]: time="2025-07-15T04:40:23.964144519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 15 04:40:24.870758 kubelet[3314]: E0715 04:40:24.870453 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm58g" podUID="e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27"
Jul 15 04:40:26.747049 containerd[1881]: time="2025-07-15T04:40:26.746992713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:26.749738 containerd[1881]: time="2025-07-15T04:40:26.749643861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 15 04:40:26.753973 containerd[1881]: time="2025-07-15T04:40:26.753918115Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:26.758118 containerd[1881]: time="2025-07-15T04:40:26.758086534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:26.758873 containerd[1881]: time="2025-07-15T04:40:26.758845302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.794449863s"
Jul 15 04:40:26.758912 containerd[1881]: time="2025-07-15T04:40:26.758882159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Jul 15 04:40:26.770935 containerd[1881]: time="2025-07-15T04:40:26.770898330Z" level=info msg="CreateContainer within sandbox \"bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 15 04:40:26.803733 containerd[1881]: time="2025-07-15T04:40:26.803052581Z" level=info msg="Container 7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:26.832393 containerd[1881]: time="2025-07-15T04:40:26.832235316Z" level=info msg="CreateContainer within sandbox \"bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612\""
Jul 15 04:40:26.832895 containerd[1881]: time="2025-07-15T04:40:26.832870944Z" level=info msg="StartContainer for \"7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612\""
Jul 15 04:40:26.834344 containerd[1881]: time="2025-07-15T04:40:26.834293932Z" level=info msg="connecting to shim 7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612" address="unix:///run/containerd/s/d7380c0097708396908ee1a93b3adc41308a7adbf316a706fc7cafdf997bb9da" protocol=ttrpc version=3
Jul 15 04:40:26.850853 systemd[1]: Started cri-containerd-7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612.scope - libcontainer container 7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612.
Jul 15 04:40:26.874019 kubelet[3314]: E0715 04:40:26.873277 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm58g" podUID="e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27"
Jul 15 04:40:26.884128 containerd[1881]: time="2025-07-15T04:40:26.884095732Z" level=info msg="StartContainer for \"7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612\" returns successfully"
Jul 15 04:40:27.973569 systemd[1]: cri-containerd-7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612.scope: Deactivated successfully.
Jul 15 04:40:27.974260 systemd[1]: cri-containerd-7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612.scope: Consumed 294ms CPU time, 193.2M memory peak, 165.8M written to disk.
Jul 15 04:40:27.975122 containerd[1881]: time="2025-07-15T04:40:27.974628232Z" level=info msg="received exit event container_id:\"7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612\" id:\"7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612\" pid:4115 exited_at:{seconds:1752554427 nanos:973308671}"
Jul 15 04:40:27.975122 containerd[1881]: time="2025-07-15T04:40:27.975091735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612\" id:\"7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612\" pid:4115 exited_at:{seconds:1752554427 nanos:973308671}"
Jul 15 04:40:27.993809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e8598f0d225a45509745d82fc77510c333957f995672ec9ba90aaf79bece612-rootfs.mount: Deactivated successfully.
Jul 15 04:40:28.054733 kubelet[3314]: I0715 04:40:28.054669 3314 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 04:40:28.459597 systemd[1]: Created slice kubepods-burstable-pod620101e4_1e3b_45da_9e88_0454710f6851.slice - libcontainer container kubepods-burstable-pod620101e4_1e3b_45da_9e88_0454710f6851.slice. Jul 15 04:40:28.552787 kubelet[3314]: I0715 04:40:28.552694 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c9dx\" (UniqueName: \"kubernetes.io/projected/620101e4-1e3b-45da-9e88-0454710f6851-kube-api-access-4c9dx\") pod \"coredns-674b8bbfcf-8pnv9\" (UID: \"620101e4-1e3b-45da-9e88-0454710f6851\") " pod="kube-system/coredns-674b8bbfcf-8pnv9" Jul 15 04:40:28.552787 kubelet[3314]: I0715 04:40:28.552784 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/620101e4-1e3b-45da-9e88-0454710f6851-config-volume\") pod \"coredns-674b8bbfcf-8pnv9\" (UID: \"620101e4-1e3b-45da-9e88-0454710f6851\") " pod="kube-system/coredns-674b8bbfcf-8pnv9" Jul 15 04:40:28.854876 kubelet[3314]: I0715 04:40:28.854814 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beb9ae68-ee75-43e3-9ce6-011d474fdc7f-tigera-ca-bundle\") pod \"calico-kube-controllers-79d7f47d74-kwq9v\" (UID: \"beb9ae68-ee75-43e3-9ce6-011d474fdc7f\") " pod="calico-system/calico-kube-controllers-79d7f47d74-kwq9v" Jul 15 04:40:28.854876 kubelet[3314]: I0715 04:40:28.854843 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ls5h\" (UniqueName: \"kubernetes.io/projected/beb9ae68-ee75-43e3-9ce6-011d474fdc7f-kube-api-access-8ls5h\") pod \"calico-kube-controllers-79d7f47d74-kwq9v\" (UID: \"beb9ae68-ee75-43e3-9ce6-011d474fdc7f\") " 
pod="calico-system/calico-kube-controllers-79d7f47d74-kwq9v" Jul 15 04:40:28.862041 systemd[1]: Created slice kubepods-besteffort-podbeb9ae68_ee75_43e3_9ce6_011d474fdc7f.slice - libcontainer container kubepods-besteffort-podbeb9ae68_ee75_43e3_9ce6_011d474fdc7f.slice. Jul 15 04:40:28.896516 systemd[1]: Created slice kubepods-besteffort-pod0a359b13_17a4_4c68_8017_804e0476456e.slice - libcontainer container kubepods-besteffort-pod0a359b13_17a4_4c68_8017_804e0476456e.slice. Jul 15 04:40:28.901143 systemd[1]: Created slice kubepods-burstable-pod75c359f9_ab12_4f43_8af1_e72044fb1241.slice - libcontainer container kubepods-burstable-pod75c359f9_ab12_4f43_8af1_e72044fb1241.slice. Jul 15 04:40:28.907067 systemd[1]: Created slice kubepods-besteffort-pod27194450_2b09_43c8_89e6_f093b196899d.slice - libcontainer container kubepods-besteffort-pod27194450_2b09_43c8_89e6_f093b196899d.slice. Jul 15 04:40:28.915743 systemd[1]: Created slice kubepods-besteffort-podd95d7ea8_de25_4f60_8023_f94edf2aef6d.slice - libcontainer container kubepods-besteffort-podd95d7ea8_de25_4f60_8023_f94edf2aef6d.slice. Jul 15 04:40:28.930569 systemd[1]: Created slice kubepods-besteffort-pode8d9b3f7_0f5c_46b4_bf2e_d2353a721e27.slice - libcontainer container kubepods-besteffort-pode8d9b3f7_0f5c_46b4_bf2e_d2353a721e27.slice. Jul 15 04:40:28.940674 containerd[1881]: time="2025-07-15T04:40:28.940635711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm58g,Uid:e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:28.945700 systemd[1]: Created slice kubepods-besteffort-pod25ef5024_8c17_4982_9c07_f2b3a463d13a.slice - libcontainer container kubepods-besteffort-pod25ef5024_8c17_4982_9c07_f2b3a463d13a.slice. 
Jul 15 04:40:28.955787 kubelet[3314]: I0715 04:40:28.955369 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjkn\" (UniqueName: \"kubernetes.io/projected/75c359f9-ab12-4f43-8af1-e72044fb1241-kube-api-access-5pjkn\") pod \"coredns-674b8bbfcf-jjvzx\" (UID: \"75c359f9-ab12-4f43-8af1-e72044fb1241\") " pod="kube-system/coredns-674b8bbfcf-jjvzx" Jul 15 04:40:28.955787 kubelet[3314]: I0715 04:40:28.955402 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2z6\" (UniqueName: \"kubernetes.io/projected/27194450-2b09-43c8-89e6-f093b196899d-kube-api-access-nb2z6\") pod \"calico-apiserver-664ff4b8b7-wb4g9\" (UID: \"27194450-2b09-43c8-89e6-f093b196899d\") " pod="calico-apiserver/calico-apiserver-664ff4b8b7-wb4g9" Jul 15 04:40:28.955787 kubelet[3314]: I0715 04:40:28.955413 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-backend-key-pair\") pod \"whisker-66cff685d7-bngpc\" (UID: \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\") " pod="calico-system/whisker-66cff685d7-bngpc" Jul 15 04:40:28.955787 kubelet[3314]: I0715 04:40:28.955425 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25ef5024-8c17-4982-9c07-f2b3a463d13a-config\") pod \"goldmane-768f4c5c69-j8nkz\" (UID: \"25ef5024-8c17-4982-9c07-f2b3a463d13a\") " pod="calico-system/goldmane-768f4c5c69-j8nkz" Jul 15 04:40:28.955787 kubelet[3314]: I0715 04:40:28.955436 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25ef5024-8c17-4982-9c07-f2b3a463d13a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-j8nkz\" (UID: 
\"25ef5024-8c17-4982-9c07-f2b3a463d13a\") " pod="calico-system/goldmane-768f4c5c69-j8nkz" Jul 15 04:40:28.955940 kubelet[3314]: I0715 04:40:28.955447 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27194450-2b09-43c8-89e6-f093b196899d-calico-apiserver-certs\") pod \"calico-apiserver-664ff4b8b7-wb4g9\" (UID: \"27194450-2b09-43c8-89e6-f093b196899d\") " pod="calico-apiserver/calico-apiserver-664ff4b8b7-wb4g9" Jul 15 04:40:28.955940 kubelet[3314]: I0715 04:40:28.955456 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpfq\" (UniqueName: \"kubernetes.io/projected/d95d7ea8-de25-4f60-8023-f94edf2aef6d-kube-api-access-2xpfq\") pod \"whisker-66cff685d7-bngpc\" (UID: \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\") " pod="calico-system/whisker-66cff685d7-bngpc" Jul 15 04:40:28.955940 kubelet[3314]: I0715 04:40:28.955466 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0a359b13-17a4-4c68-8017-804e0476456e-calico-apiserver-certs\") pod \"calico-apiserver-664ff4b8b7-qj8jc\" (UID: \"0a359b13-17a4-4c68-8017-804e0476456e\") " pod="calico-apiserver/calico-apiserver-664ff4b8b7-qj8jc" Jul 15 04:40:28.955940 kubelet[3314]: I0715 04:40:28.955477 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-ca-bundle\") pod \"whisker-66cff685d7-bngpc\" (UID: \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\") " pod="calico-system/whisker-66cff685d7-bngpc" Jul 15 04:40:28.955940 kubelet[3314]: I0715 04:40:28.955495 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/25ef5024-8c17-4982-9c07-f2b3a463d13a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-j8nkz\" (UID: \"25ef5024-8c17-4982-9c07-f2b3a463d13a\") " pod="calico-system/goldmane-768f4c5c69-j8nkz" Jul 15 04:40:28.956017 kubelet[3314]: I0715 04:40:28.955506 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c359f9-ab12-4f43-8af1-e72044fb1241-config-volume\") pod \"coredns-674b8bbfcf-jjvzx\" (UID: \"75c359f9-ab12-4f43-8af1-e72044fb1241\") " pod="kube-system/coredns-674b8bbfcf-jjvzx" Jul 15 04:40:28.956017 kubelet[3314]: I0715 04:40:28.955516 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9gvq\" (UniqueName: \"kubernetes.io/projected/25ef5024-8c17-4982-9c07-f2b3a463d13a-kube-api-access-v9gvq\") pod \"goldmane-768f4c5c69-j8nkz\" (UID: \"25ef5024-8c17-4982-9c07-f2b3a463d13a\") " pod="calico-system/goldmane-768f4c5c69-j8nkz" Jul 15 04:40:28.956017 kubelet[3314]: I0715 04:40:28.955524 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6fwh\" (UniqueName: \"kubernetes.io/projected/0a359b13-17a4-4c68-8017-804e0476456e-kube-api-access-b6fwh\") pod \"calico-apiserver-664ff4b8b7-qj8jc\" (UID: \"0a359b13-17a4-4c68-8017-804e0476456e\") " pod="calico-apiserver/calico-apiserver-664ff4b8b7-qj8jc" Jul 15 04:40:28.982026 containerd[1881]: time="2025-07-15T04:40:28.981912985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 04:40:29.009304 containerd[1881]: time="2025-07-15T04:40:29.008806216Z" level=error msg="Failed to destroy network for sandbox \"c7683176a40882c641c806785bbe56f4db02c2703f62690bc9cacf45a9c9e1ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 15 04:40:29.010186 systemd[1]: run-netns-cni\x2dc9af9dcc\x2d869b\x2df9c1\x2df4ea\x2db0c31f3c3b6c.mount: Deactivated successfully. Jul 15 04:40:29.015984 containerd[1881]: time="2025-07-15T04:40:29.015945416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm58g,Uid:e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7683176a40882c641c806785bbe56f4db02c2703f62690bc9cacf45a9c9e1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.016300 kubelet[3314]: E0715 04:40:29.016255 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7683176a40882c641c806785bbe56f4db02c2703f62690bc9cacf45a9c9e1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.016430 kubelet[3314]: E0715 04:40:29.016413 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7683176a40882c641c806785bbe56f4db02c2703f62690bc9cacf45a9c9e1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qm58g" Jul 15 04:40:29.016508 kubelet[3314]: E0715 04:40:29.016492 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7683176a40882c641c806785bbe56f4db02c2703f62690bc9cacf45a9c9e1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qm58g" Jul 15 04:40:29.016632 kubelet[3314]: E0715 04:40:29.016601 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qm58g_calico-system(e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qm58g_calico-system(e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7683176a40882c641c806785bbe56f4db02c2703f62690bc9cacf45a9c9e1ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qm58g" podUID="e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27" Jul 15 04:40:29.070234 containerd[1881]: time="2025-07-15T04:40:29.069983917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pnv9,Uid:620101e4-1e3b-45da-9e88-0454710f6851,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:29.121872 containerd[1881]: time="2025-07-15T04:40:29.121141639Z" level=error msg="Failed to destroy network for sandbox \"ba12bcd46fb1a478f15a71d35146f9f027fdb44c2492c3728b7d0031fbe3ee63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.127137 containerd[1881]: time="2025-07-15T04:40:29.127103130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pnv9,Uid:620101e4-1e3b-45da-9e88-0454710f6851,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba12bcd46fb1a478f15a71d35146f9f027fdb44c2492c3728b7d0031fbe3ee63\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.127494 kubelet[3314]: E0715 04:40:29.127458 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba12bcd46fb1a478f15a71d35146f9f027fdb44c2492c3728b7d0031fbe3ee63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.127896 kubelet[3314]: E0715 04:40:29.127517 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba12bcd46fb1a478f15a71d35146f9f027fdb44c2492c3728b7d0031fbe3ee63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8pnv9" Jul 15 04:40:29.127896 kubelet[3314]: E0715 04:40:29.127535 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba12bcd46fb1a478f15a71d35146f9f027fdb44c2492c3728b7d0031fbe3ee63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8pnv9" Jul 15 04:40:29.127896 kubelet[3314]: E0715 04:40:29.127580 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8pnv9_kube-system(620101e4-1e3b-45da-9e88-0454710f6851)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8pnv9_kube-system(620101e4-1e3b-45da-9e88-0454710f6851)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"ba12bcd46fb1a478f15a71d35146f9f027fdb44c2492c3728b7d0031fbe3ee63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8pnv9" podUID="620101e4-1e3b-45da-9e88-0454710f6851" Jul 15 04:40:29.179526 containerd[1881]: time="2025-07-15T04:40:29.179476674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d7f47d74-kwq9v,Uid:beb9ae68-ee75-43e3-9ce6-011d474fdc7f,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:29.205157 containerd[1881]: time="2025-07-15T04:40:29.204938860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jjvzx,Uid:75c359f9-ab12-4f43-8af1-e72044fb1241,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:29.205511 containerd[1881]: time="2025-07-15T04:40:29.205494053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-qj8jc,Uid:0a359b13-17a4-4c68-8017-804e0476456e,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:29.210602 containerd[1881]: time="2025-07-15T04:40:29.210581189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-wb4g9,Uid:27194450-2b09-43c8-89e6-f093b196899d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:29.222170 containerd[1881]: time="2025-07-15T04:40:29.222142049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cff685d7-bngpc,Uid:d95d7ea8-de25-4f60-8023-f94edf2aef6d,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:29.235737 containerd[1881]: time="2025-07-15T04:40:29.235641610Z" level=error msg="Failed to destroy network for sandbox \"f42f1faf9d1a44bc4ed59d87554ce9769b2829dfcddf8bb448f89efdcdad400e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 15 04:40:29.250467 containerd[1881]: time="2025-07-15T04:40:29.250402842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j8nkz,Uid:25ef5024-8c17-4982-9c07-f2b3a463d13a,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:29.268337 containerd[1881]: time="2025-07-15T04:40:29.268228147Z" level=error msg="Failed to destroy network for sandbox \"79106a54f891bb0975c64e8e0b83e8a21758d0111865b8c479fcc81d3b238568\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.313900 containerd[1881]: time="2025-07-15T04:40:29.313852407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d7f47d74-kwq9v,Uid:beb9ae68-ee75-43e3-9ce6-011d474fdc7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42f1faf9d1a44bc4ed59d87554ce9769b2829dfcddf8bb448f89efdcdad400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.314411 kubelet[3314]: E0715 04:40:29.314361 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42f1faf9d1a44bc4ed59d87554ce9769b2829dfcddf8bb448f89efdcdad400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.314488 kubelet[3314]: E0715 04:40:29.314431 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42f1faf9d1a44bc4ed59d87554ce9769b2829dfcddf8bb448f89efdcdad400e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79d7f47d74-kwq9v" Jul 15 04:40:29.314488 kubelet[3314]: E0715 04:40:29.314449 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42f1faf9d1a44bc4ed59d87554ce9769b2829dfcddf8bb448f89efdcdad400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79d7f47d74-kwq9v" Jul 15 04:40:29.314535 kubelet[3314]: E0715 04:40:29.314496 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79d7f47d74-kwq9v_calico-system(beb9ae68-ee75-43e3-9ce6-011d474fdc7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79d7f47d74-kwq9v_calico-system(beb9ae68-ee75-43e3-9ce6-011d474fdc7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f42f1faf9d1a44bc4ed59d87554ce9769b2829dfcddf8bb448f89efdcdad400e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79d7f47d74-kwq9v" podUID="beb9ae68-ee75-43e3-9ce6-011d474fdc7f" Jul 15 04:40:29.330168 containerd[1881]: time="2025-07-15T04:40:29.330063365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jjvzx,Uid:75c359f9-ab12-4f43-8af1-e72044fb1241,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79106a54f891bb0975c64e8e0b83e8a21758d0111865b8c479fcc81d3b238568\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.330321 kubelet[3314]: E0715 04:40:29.330273 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79106a54f891bb0975c64e8e0b83e8a21758d0111865b8c479fcc81d3b238568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.330359 kubelet[3314]: E0715 04:40:29.330334 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79106a54f891bb0975c64e8e0b83e8a21758d0111865b8c479fcc81d3b238568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jjvzx" Jul 15 04:40:29.330359 kubelet[3314]: E0715 04:40:29.330352 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79106a54f891bb0975c64e8e0b83e8a21758d0111865b8c479fcc81d3b238568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jjvzx" Jul 15 04:40:29.330452 kubelet[3314]: E0715 04:40:29.330397 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jjvzx_kube-system(75c359f9-ab12-4f43-8af1-e72044fb1241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jjvzx_kube-system(75c359f9-ab12-4f43-8af1-e72044fb1241)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"79106a54f891bb0975c64e8e0b83e8a21758d0111865b8c479fcc81d3b238568\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jjvzx" podUID="75c359f9-ab12-4f43-8af1-e72044fb1241" Jul 15 04:40:29.355817 containerd[1881]: time="2025-07-15T04:40:29.355777014Z" level=error msg="Failed to destroy network for sandbox \"3c182ec3abfe1ce1e8624d9a7f1fdeada27aa12438d6c8d68bf7119d60dcd50c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.372943 containerd[1881]: time="2025-07-15T04:40:29.372369104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-qj8jc,Uid:0a359b13-17a4-4c68-8017-804e0476456e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c182ec3abfe1ce1e8624d9a7f1fdeada27aa12438d6c8d68bf7119d60dcd50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.373054 kubelet[3314]: E0715 04:40:29.372620 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c182ec3abfe1ce1e8624d9a7f1fdeada27aa12438d6c8d68bf7119d60dcd50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.373054 kubelet[3314]: E0715 04:40:29.372676 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3c182ec3abfe1ce1e8624d9a7f1fdeada27aa12438d6c8d68bf7119d60dcd50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664ff4b8b7-qj8jc" Jul 15 04:40:29.373054 kubelet[3314]: E0715 04:40:29.372704 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c182ec3abfe1ce1e8624d9a7f1fdeada27aa12438d6c8d68bf7119d60dcd50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664ff4b8b7-qj8jc" Jul 15 04:40:29.373318 kubelet[3314]: E0715 04:40:29.373085 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-664ff4b8b7-qj8jc_calico-apiserver(0a359b13-17a4-4c68-8017-804e0476456e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-664ff4b8b7-qj8jc_calico-apiserver(0a359b13-17a4-4c68-8017-804e0476456e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c182ec3abfe1ce1e8624d9a7f1fdeada27aa12438d6c8d68bf7119d60dcd50c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-664ff4b8b7-qj8jc" podUID="0a359b13-17a4-4c68-8017-804e0476456e" Jul 15 04:40:29.382324 containerd[1881]: time="2025-07-15T04:40:29.382224110Z" level=error msg="Failed to destroy network for sandbox \"c73aafe785614f084cf1388b0368d50de9845eb4dd53badb1293bdce8e69400e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 15 04:40:29.389102 containerd[1881]: time="2025-07-15T04:40:29.389000388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-wb4g9,Uid:27194450-2b09-43c8-89e6-f093b196899d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73aafe785614f084cf1388b0368d50de9845eb4dd53badb1293bdce8e69400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.389277 kubelet[3314]: E0715 04:40:29.389195 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73aafe785614f084cf1388b0368d50de9845eb4dd53badb1293bdce8e69400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.389277 kubelet[3314]: E0715 04:40:29.389241 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73aafe785614f084cf1388b0368d50de9845eb4dd53badb1293bdce8e69400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664ff4b8b7-wb4g9" Jul 15 04:40:29.389277 kubelet[3314]: E0715 04:40:29.389255 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73aafe785614f084cf1388b0368d50de9845eb4dd53badb1293bdce8e69400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-664ff4b8b7-wb4g9" Jul 15 04:40:29.390402 kubelet[3314]: E0715 04:40:29.389297 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-664ff4b8b7-wb4g9_calico-apiserver(27194450-2b09-43c8-89e6-f093b196899d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-664ff4b8b7-wb4g9_calico-apiserver(27194450-2b09-43c8-89e6-f093b196899d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c73aafe785614f084cf1388b0368d50de9845eb4dd53badb1293bdce8e69400e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-664ff4b8b7-wb4g9" podUID="27194450-2b09-43c8-89e6-f093b196899d" Jul 15 04:40:29.396457 containerd[1881]: time="2025-07-15T04:40:29.396374580Z" level=error msg="Failed to destroy network for sandbox \"79da53b930767ffc9f7cb5250033afa664f99e3a0f2ee13b1b821b0479315218\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.399908 containerd[1881]: time="2025-07-15T04:40:29.399880826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cff685d7-bngpc,Uid:d95d7ea8-de25-4f60-8023-f94edf2aef6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79da53b930767ffc9f7cb5250033afa664f99e3a0f2ee13b1b821b0479315218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.400262 kubelet[3314]: E0715 04:40:29.400176 3314 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79da53b930767ffc9f7cb5250033afa664f99e3a0f2ee13b1b821b0479315218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.400262 kubelet[3314]: E0715 04:40:29.400224 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79da53b930767ffc9f7cb5250033afa664f99e3a0f2ee13b1b821b0479315218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cff685d7-bngpc" Jul 15 04:40:29.400262 kubelet[3314]: E0715 04:40:29.400237 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79da53b930767ffc9f7cb5250033afa664f99e3a0f2ee13b1b821b0479315218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cff685d7-bngpc" Jul 15 04:40:29.401558 kubelet[3314]: E0715 04:40:29.400960 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66cff685d7-bngpc_calico-system(d95d7ea8-de25-4f60-8023-f94edf2aef6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66cff685d7-bngpc_calico-system(d95d7ea8-de25-4f60-8023-f94edf2aef6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79da53b930767ffc9f7cb5250033afa664f99e3a0f2ee13b1b821b0479315218\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66cff685d7-bngpc" podUID="d95d7ea8-de25-4f60-8023-f94edf2aef6d" Jul 15 04:40:29.410632 containerd[1881]: time="2025-07-15T04:40:29.410598403Z" level=error msg="Failed to destroy network for sandbox \"6ca344aa8e80d01dca468ca6fda3338dac938be3ded50a97fd2632f8c6097b6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.418639 containerd[1881]: time="2025-07-15T04:40:29.418604359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j8nkz,Uid:25ef5024-8c17-4982-9c07-f2b3a463d13a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ca344aa8e80d01dca468ca6fda3338dac938be3ded50a97fd2632f8c6097b6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.418966 kubelet[3314]: E0715 04:40:29.418931 3314 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ca344aa8e80d01dca468ca6fda3338dac938be3ded50a97fd2632f8c6097b6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:29.419085 kubelet[3314]: E0715 04:40:29.419068 3314 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ca344aa8e80d01dca468ca6fda3338dac938be3ded50a97fd2632f8c6097b6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-768f4c5c69-j8nkz" Jul 15 04:40:29.419206 kubelet[3314]: E0715 04:40:29.419130 3314 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ca344aa8e80d01dca468ca6fda3338dac938be3ded50a97fd2632f8c6097b6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-j8nkz" Jul 15 04:40:29.419292 kubelet[3314]: E0715 04:40:29.419269 3314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-j8nkz_calico-system(25ef5024-8c17-4982-9c07-f2b3a463d13a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-j8nkz_calico-system(25ef5024-8c17-4982-9c07-f2b3a463d13a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ca344aa8e80d01dca468ca6fda3338dac938be3ded50a97fd2632f8c6097b6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-j8nkz" podUID="25ef5024-8c17-4982-9c07-f2b3a463d13a" Jul 15 04:40:30.012817 systemd[1]: run-netns-cni\x2d52041a79\x2de967\x2de1c0\x2d0360\x2d6e5653a00427.mount: Deactivated successfully. Jul 15 04:40:35.236919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256409297.mount: Deactivated successfully. 
Jul 15 04:40:35.539766 containerd[1881]: time="2025-07-15T04:40:35.539700490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:35.546908 containerd[1881]: time="2025-07-15T04:40:35.546867111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 04:40:35.551968 containerd[1881]: time="2025-07-15T04:40:35.551921211Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:35.556757 containerd[1881]: time="2025-07-15T04:40:35.556720031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:35.557210 containerd[1881]: time="2025-07-15T04:40:35.556964751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.572562318s" Jul 15 04:40:35.557210 containerd[1881]: time="2025-07-15T04:40:35.556995600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 04:40:35.573632 containerd[1881]: time="2025-07-15T04:40:35.573604560Z" level=info msg="CreateContainer within sandbox \"bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 04:40:35.602789 containerd[1881]: time="2025-07-15T04:40:35.602692104Z" level=info msg="Container 
caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:35.633464 containerd[1881]: time="2025-07-15T04:40:35.633426028Z" level=info msg="CreateContainer within sandbox \"bd87c5d96887021cbcb1da24499555295215c066f861e8a112739571a21d05f3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\"" Jul 15 04:40:35.634141 containerd[1881]: time="2025-07-15T04:40:35.634082928Z" level=info msg="StartContainer for \"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\"" Jul 15 04:40:35.636964 containerd[1881]: time="2025-07-15T04:40:35.636909039Z" level=info msg="connecting to shim caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956" address="unix:///run/containerd/s/d7380c0097708396908ee1a93b3adc41308a7adbf316a706fc7cafdf997bb9da" protocol=ttrpc version=3 Jul 15 04:40:35.656836 systemd[1]: Started cri-containerd-caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956.scope - libcontainer container caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956. Jul 15 04:40:35.688737 containerd[1881]: time="2025-07-15T04:40:35.688601489Z" level=info msg="StartContainer for \"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\" returns successfully" Jul 15 04:40:35.940981 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 04:40:35.941105 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 04:40:36.031914 kubelet[3314]: I0715 04:40:36.031858 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6278d" podStartSLOduration=1.358640195 podStartE2EDuration="19.031741573s" podCreationTimestamp="2025-07-15 04:40:17 +0000 UTC" firstStartedPulling="2025-07-15 04:40:17.884481776 +0000 UTC m=+21.082212506" lastFinishedPulling="2025-07-15 04:40:35.557583154 +0000 UTC m=+38.755313884" observedRunningTime="2025-07-15 04:40:36.02236386 +0000 UTC m=+39.220094590" watchObservedRunningTime="2025-07-15 04:40:36.031741573 +0000 UTC m=+39.229472303" Jul 15 04:40:36.095374 kubelet[3314]: I0715 04:40:36.095053 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xpfq\" (UniqueName: \"kubernetes.io/projected/d95d7ea8-de25-4f60-8023-f94edf2aef6d-kube-api-access-2xpfq\") pod \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\" (UID: \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\") " Jul 15 04:40:36.095374 kubelet[3314]: I0715 04:40:36.095093 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-backend-key-pair\") pod \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\" (UID: \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\") " Jul 15 04:40:36.095374 kubelet[3314]: I0715 04:40:36.095120 3314 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-ca-bundle\") pod \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\" (UID: \"d95d7ea8-de25-4f60-8023-f94edf2aef6d\") " Jul 15 04:40:36.098791 kubelet[3314]: I0715 04:40:36.097943 3314 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"d95d7ea8-de25-4f60-8023-f94edf2aef6d" (UID: "d95d7ea8-de25-4f60-8023-f94edf2aef6d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 04:40:36.099889 kubelet[3314]: I0715 04:40:36.099674 3314 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95d7ea8-de25-4f60-8023-f94edf2aef6d-kube-api-access-2xpfq" (OuterVolumeSpecName: "kube-api-access-2xpfq") pod "d95d7ea8-de25-4f60-8023-f94edf2aef6d" (UID: "d95d7ea8-de25-4f60-8023-f94edf2aef6d"). InnerVolumeSpecName "kube-api-access-2xpfq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 04:40:36.101926 kubelet[3314]: I0715 04:40:36.101890 3314 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d95d7ea8-de25-4f60-8023-f94edf2aef6d" (UID: "d95d7ea8-de25-4f60-8023-f94edf2aef6d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 04:40:36.102002 containerd[1881]: time="2025-07-15T04:40:36.101943218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\" id:\"9fdf66ff0317412ce8a9e2ab9725bbd329c4f9db5700c5e9ed9de1bfd7fd1d75\" pid:4428 exit_status:1 exited_at:{seconds:1752554436 nanos:101357208}" Jul 15 04:40:36.195457 kubelet[3314]: I0715 04:40:36.195377 3314 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-backend-key-pair\") on node \"ci-4396.0.0-n-16ec4aa50e\" DevicePath \"\"" Jul 15 04:40:36.195457 kubelet[3314]: I0715 04:40:36.195416 3314 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d95d7ea8-de25-4f60-8023-f94edf2aef6d-whisker-ca-bundle\") on node \"ci-4396.0.0-n-16ec4aa50e\" DevicePath \"\"" Jul 15 04:40:36.195457 kubelet[3314]: I0715 04:40:36.195426 3314 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xpfq\" (UniqueName: \"kubernetes.io/projected/d95d7ea8-de25-4f60-8023-f94edf2aef6d-kube-api-access-2xpfq\") on node \"ci-4396.0.0-n-16ec4aa50e\" DevicePath \"\"" Jul 15 04:40:36.237661 systemd[1]: var-lib-kubelet-pods-d95d7ea8\x2dde25\x2d4f60\x2d8023\x2df94edf2aef6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xpfq.mount: Deactivated successfully. Jul 15 04:40:36.238024 systemd[1]: var-lib-kubelet-pods-d95d7ea8\x2dde25\x2d4f60\x2d8023\x2df94edf2aef6d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 04:40:36.876788 systemd[1]: Removed slice kubepods-besteffort-podd95d7ea8_de25_4f60_8023_f94edf2aef6d.slice - libcontainer container kubepods-besteffort-podd95d7ea8_de25_4f60_8023_f94edf2aef6d.slice. 
Jul 15 04:40:37.077242 systemd[1]: Created slice kubepods-besteffort-pod6b99de8e_095e_4e5d_890e_fd6b167dcde6.slice - libcontainer container kubepods-besteffort-pod6b99de8e_095e_4e5d_890e_fd6b167dcde6.slice. Jul 15 04:40:37.092730 containerd[1881]: time="2025-07-15T04:40:37.092680614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\" id:\"1c8431d93ed9dcdbc36f10c89eff0a2b56fe33e97bff22a71c9bfdff0d5c0c72\" pid:4472 exit_status:1 exited_at:{seconds:1752554437 nanos:91676959}" Jul 15 04:40:37.098730 kubelet[3314]: I0715 04:40:37.098655 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b99de8e-095e-4e5d-890e-fd6b167dcde6-whisker-ca-bundle\") pod \"whisker-6c4b8cc5c-rnbht\" (UID: \"6b99de8e-095e-4e5d-890e-fd6b167dcde6\") " pod="calico-system/whisker-6c4b8cc5c-rnbht" Jul 15 04:40:37.098730 kubelet[3314]: I0715 04:40:37.098696 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdsr2\" (UniqueName: \"kubernetes.io/projected/6b99de8e-095e-4e5d-890e-fd6b167dcde6-kube-api-access-mdsr2\") pod \"whisker-6c4b8cc5c-rnbht\" (UID: \"6b99de8e-095e-4e5d-890e-fd6b167dcde6\") " pod="calico-system/whisker-6c4b8cc5c-rnbht" Jul 15 04:40:37.099499 kubelet[3314]: I0715 04:40:37.098817 3314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b99de8e-095e-4e5d-890e-fd6b167dcde6-whisker-backend-key-pair\") pod \"whisker-6c4b8cc5c-rnbht\" (UID: \"6b99de8e-095e-4e5d-890e-fd6b167dcde6\") " pod="calico-system/whisker-6c4b8cc5c-rnbht" Jul 15 04:40:37.381730 containerd[1881]: time="2025-07-15T04:40:37.381511487Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6c4b8cc5c-rnbht,Uid:6b99de8e-095e-4e5d-890e-fd6b167dcde6,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:37.472120 systemd-networkd[1618]: calif57342dce11: Link UP Jul 15 04:40:37.473074 systemd-networkd[1618]: calif57342dce11: Gained carrier Jul 15 04:40:37.493918 containerd[1881]: 2025-07-15 04:40:37.410 [INFO][4582] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:37.493918 containerd[1881]: 2025-07-15 04:40:37.422 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0 whisker-6c4b8cc5c- calico-system 6b99de8e-095e-4e5d-890e-fd6b167dcde6 880 0 2025-07-15 04:40:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c4b8cc5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e whisker-6c4b8cc5c-rnbht eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif57342dce11 [] [] }} ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-" Jul 15 04:40:37.493918 containerd[1881]: 2025-07-15 04:40:37.422 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" Jul 15 04:40:37.493918 containerd[1881]: 2025-07-15 04:40:37.437 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" 
HandleID="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.437 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" HandleID="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-16ec4aa50e", "pod":"whisker-6c4b8cc5c-rnbht", "timestamp":"2025-07-15 04:40:37.437760614 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.437 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.437 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.437 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.443 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.446 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.449 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.450 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494234 containerd[1881]: 2025-07-15 04:40:37.452 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494857 containerd[1881]: 2025-07-15 04:40:37.452 [INFO][4593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494857 containerd[1881]: 2025-07-15 04:40:37.453 [INFO][4593] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570 Jul 15 04:40:37.494857 containerd[1881]: 2025-07-15 04:40:37.459 [INFO][4593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494857 containerd[1881]: 2025-07-15 04:40:37.463 [INFO][4593] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.1/26] block=192.168.87.0/26 handle="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494857 containerd[1881]: 2025-07-15 04:40:37.463 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.1/26] handle="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:37.494857 containerd[1881]: 2025-07-15 04:40:37.463 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:40:37.494857 containerd[1881]: 2025-07-15 04:40:37.464 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.1/26] IPv6=[] ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" HandleID="k8s-pod-network.71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" Jul 15 04:40:37.494951 containerd[1881]: 2025-07-15 04:40:37.466 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0", GenerateName:"whisker-6c4b8cc5c-", Namespace:"calico-system", SelfLink:"", UID:"6b99de8e-095e-4e5d-890e-fd6b167dcde6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c4b8cc5c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"whisker-6c4b8cc5c-rnbht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif57342dce11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:37.494951 containerd[1881]: 2025-07-15 04:40:37.466 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.1/32] ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" Jul 15 04:40:37.495006 containerd[1881]: 2025-07-15 04:40:37.466 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif57342dce11 ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" Jul 15 04:40:37.495006 containerd[1881]: 2025-07-15 04:40:37.474 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" Jul 15 04:40:37.495035 containerd[1881]: 2025-07-15 04:40:37.474 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0", GenerateName:"whisker-6c4b8cc5c-", Namespace:"calico-system", SelfLink:"", UID:"6b99de8e-095e-4e5d-890e-fd6b167dcde6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c4b8cc5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570", Pod:"whisker-6c4b8cc5c-rnbht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif57342dce11", MAC:"ae:a9:88:1e:69:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:37.495067 containerd[1881]: 2025-07-15 04:40:37.489 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" Namespace="calico-system" Pod="whisker-6c4b8cc5c-rnbht" 
WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-whisker--6c4b8cc5c--rnbht-eth0" Jul 15 04:40:37.551475 containerd[1881]: time="2025-07-15T04:40:37.550937583Z" level=info msg="connecting to shim 71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570" address="unix:///run/containerd/s/290b1bf2eab0593276b8624adee17fcb9fbfab9a0b877f6c0ad6e47a05c02657" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:37.571838 systemd[1]: Started cri-containerd-71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570.scope - libcontainer container 71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570. Jul 15 04:40:37.611899 containerd[1881]: time="2025-07-15T04:40:37.611806124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4b8cc5c-rnbht,Uid:6b99de8e-095e-4e5d-890e-fd6b167dcde6,Namespace:calico-system,Attempt:0,} returns sandbox id \"71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570\"" Jul 15 04:40:37.613764 containerd[1881]: time="2025-07-15T04:40:37.613701390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 04:40:38.875842 kubelet[3314]: I0715 04:40:38.875673 3314 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95d7ea8-de25-4f60-8023-f94edf2aef6d" path="/var/lib/kubelet/pods/d95d7ea8-de25-4f60-8023-f94edf2aef6d/volumes" Jul 15 04:40:39.011094 containerd[1881]: time="2025-07-15T04:40:39.010970096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:39.015269 containerd[1881]: time="2025-07-15T04:40:39.015076175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 04:40:39.020824 containerd[1881]: time="2025-07-15T04:40:39.020801199Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 15 04:40:39.025479 containerd[1881]: time="2025-07-15T04:40:39.025435110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:39.025891 containerd[1881]: time="2025-07-15T04:40:39.025749408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.411982088s" Jul 15 04:40:39.025891 containerd[1881]: time="2025-07-15T04:40:39.025775993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 04:40:39.034122 containerd[1881]: time="2025-07-15T04:40:39.034052256Z" level=info msg="CreateContainer within sandbox \"71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 04:40:39.065250 containerd[1881]: time="2025-07-15T04:40:39.065217337Z" level=info msg="Container a06c709fa66df31411a9dce068d4fc18b8cb30ec484d251699e072132ea43799: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:39.089591 containerd[1881]: time="2025-07-15T04:40:39.089403203Z" level=info msg="CreateContainer within sandbox \"71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a06c709fa66df31411a9dce068d4fc18b8cb30ec484d251699e072132ea43799\"" Jul 15 04:40:39.089935 containerd[1881]: time="2025-07-15T04:40:39.089910466Z" level=info msg="StartContainer for \"a06c709fa66df31411a9dce068d4fc18b8cb30ec484d251699e072132ea43799\"" Jul 15 
04:40:39.091087 containerd[1881]: time="2025-07-15T04:40:39.091062774Z" level=info msg="connecting to shim a06c709fa66df31411a9dce068d4fc18b8cb30ec484d251699e072132ea43799" address="unix:///run/containerd/s/290b1bf2eab0593276b8624adee17fcb9fbfab9a0b877f6c0ad6e47a05c02657" protocol=ttrpc version=3 Jul 15 04:40:39.109841 systemd[1]: Started cri-containerd-a06c709fa66df31411a9dce068d4fc18b8cb30ec484d251699e072132ea43799.scope - libcontainer container a06c709fa66df31411a9dce068d4fc18b8cb30ec484d251699e072132ea43799. Jul 15 04:40:39.143201 containerd[1881]: time="2025-07-15T04:40:39.143102874Z" level=info msg="StartContainer for \"a06c709fa66df31411a9dce068d4fc18b8cb30ec484d251699e072132ea43799\" returns successfully" Jul 15 04:40:39.146055 containerd[1881]: time="2025-07-15T04:40:39.146022404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 04:40:39.476882 systemd-networkd[1618]: calif57342dce11: Gained IPv6LL Jul 15 04:40:39.871069 containerd[1881]: time="2025-07-15T04:40:39.870878650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm58g,Uid:e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:39.968872 systemd-networkd[1618]: cali25a8b07bd8a: Link UP Jul 15 04:40:39.970160 systemd-networkd[1618]: cali25a8b07bd8a: Gained carrier Jul 15 04:40:39.984313 containerd[1881]: 2025-07-15 04:40:39.913 [INFO][4730] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:39.984313 containerd[1881]: 2025-07-15 04:40:39.922 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0 csi-node-driver- calico-system e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27 700 0 2025-07-15 04:40:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e csi-node-driver-qm58g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali25a8b07bd8a [] [] }} ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-" Jul 15 04:40:39.984313 containerd[1881]: 2025-07-15 04:40:39.922 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" Jul 15 04:40:39.984313 containerd[1881]: 2025-07-15 04:40:39.938 [INFO][4742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" HandleID="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.938 [INFO][4742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" HandleID="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-16ec4aa50e", "pod":"csi-node-driver-qm58g", "timestamp":"2025-07-15 04:40:39.938366379 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.938 [INFO][4742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.938 [INFO][4742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.938 [INFO][4742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.943 [INFO][4742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.946 [INFO][4742] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.949 [INFO][4742] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.950 [INFO][4742] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984486 containerd[1881]: 2025-07-15 04:40:39.952 [INFO][4742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984616 containerd[1881]: 2025-07-15 04:40:39.952 [INFO][4742] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984616 containerd[1881]: 2025-07-15 04:40:39.953 [INFO][4742] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8 Jul 15 04:40:39.984616 containerd[1881]: 2025-07-15 04:40:39.956 [INFO][4742] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984616 containerd[1881]: 2025-07-15 04:40:39.962 [INFO][4742] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.2/26] block=192.168.87.0/26 handle="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984616 containerd[1881]: 2025-07-15 04:40:39.963 [INFO][4742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.2/26] handle="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:39.984616 containerd[1881]: 2025-07-15 04:40:39.963 [INFO][4742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:39.984616 containerd[1881]: 2025-07-15 04:40:39.963 [INFO][4742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.2/26] IPv6=[] ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" HandleID="k8s-pod-network.329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" Jul 15 04:40:39.984726 containerd[1881]: 2025-07-15 04:40:39.964 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"csi-node-driver-qm58g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25a8b07bd8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:39.984825 containerd[1881]: 2025-07-15 04:40:39.965 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.2/32] ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" Jul 15 04:40:39.984825 containerd[1881]: 2025-07-15 04:40:39.965 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25a8b07bd8a ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" Jul 15 04:40:39.984825 containerd[1881]: 2025-07-15 04:40:39.970 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" Jul 15 04:40:39.984895 containerd[1881]: 2025-07-15 04:40:39.970 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8", Pod:"csi-node-driver-qm58g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25a8b07bd8a", MAC:"76:8b:04:07:80:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:39.984954 containerd[1881]: 2025-07-15 04:40:39.982 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" Namespace="calico-system" Pod="csi-node-driver-qm58g" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-csi--node--driver--qm58g-eth0" Jul 15 04:40:40.046085 containerd[1881]: time="2025-07-15T04:40:40.046049651Z" level=info msg="connecting to shim 329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8" address="unix:///run/containerd/s/04b5ebac4d31e07dea94508dcd98ced6fac86ac4435baf9b4cf9591eaf706a6a" namespace=k8s.io protocol=ttrpc version=3 
Jul 15 04:40:40.066871 systemd[1]: Started cri-containerd-329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8.scope - libcontainer container 329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8. Jul 15 04:40:40.095356 containerd[1881]: time="2025-07-15T04:40:40.095322402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm58g,Uid:e8d9b3f7-0f5c-46b4-bf2e-d2353a721e27,Namespace:calico-system,Attempt:0,} returns sandbox id \"329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8\"" Jul 15 04:40:40.872061 containerd[1881]: time="2025-07-15T04:40:40.871941708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-qj8jc,Uid:0a359b13-17a4-4c68-8017-804e0476456e,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:40.872794 containerd[1881]: time="2025-07-15T04:40:40.872556359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jjvzx,Uid:75c359f9-ab12-4f43-8af1-e72044fb1241,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:40.872992 containerd[1881]: time="2025-07-15T04:40:40.872774590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d7f47d74-kwq9v,Uid:beb9ae68-ee75-43e3-9ce6-011d474fdc7f,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:41.019791 systemd-networkd[1618]: calibf8c4597e86: Link UP Jul 15 04:40:41.019981 systemd-networkd[1618]: calibf8c4597e86: Gained carrier Jul 15 04:40:41.034278 containerd[1881]: 2025-07-15 04:40:40.916 [INFO][4822] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:41.034278 containerd[1881]: 2025-07-15 04:40:40.930 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0 calico-apiserver-664ff4b8b7- calico-apiserver 0a359b13-17a4-4c68-8017-804e0476456e 812 0 2025-07-15 04:40:13 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:664ff4b8b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e calico-apiserver-664ff4b8b7-qj8jc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf8c4597e86 [] [] }} ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-" Jul 15 04:40:41.034278 containerd[1881]: 2025-07-15 04:40:40.930 [INFO][4822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" Jul 15 04:40:41.034278 containerd[1881]: 2025-07-15 04:40:40.978 [INFO][4856] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" HandleID="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.978 [INFO][4856] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" HandleID="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-16ec4aa50e", 
"pod":"calico-apiserver-664ff4b8b7-qj8jc", "timestamp":"2025-07-15 04:40:40.978156375 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.978 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.978 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.978 [INFO][4856] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.987 [INFO][4856] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.991 [INFO][4856] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.995 [INFO][4856] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.996 [INFO][4856] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034460 containerd[1881]: 2025-07-15 04:40:40.998 [INFO][4856] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034598 containerd[1881]: 2025-07-15 04:40:40.998 [INFO][4856] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" 
host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034598 containerd[1881]: 2025-07-15 04:40:41.000 [INFO][4856] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2 Jul 15 04:40:41.034598 containerd[1881]: 2025-07-15 04:40:41.003 [INFO][4856] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034598 containerd[1881]: 2025-07-15 04:40:41.013 [INFO][4856] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.3/26] block=192.168.87.0/26 handle="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034598 containerd[1881]: 2025-07-15 04:40:41.013 [INFO][4856] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.3/26] handle="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.034598 containerd[1881]: 2025-07-15 04:40:41.013 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:41.034598 containerd[1881]: 2025-07-15 04:40:41.013 [INFO][4856] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.3/26] IPv6=[] ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" HandleID="k8s-pod-network.12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" Jul 15 04:40:41.034773 containerd[1881]: 2025-07-15 04:40:41.015 [INFO][4822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0", GenerateName:"calico-apiserver-664ff4b8b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a359b13-17a4-4c68-8017-804e0476456e", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"664ff4b8b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"calico-apiserver-664ff4b8b7-qj8jc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf8c4597e86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:41.034819 containerd[1881]: 2025-07-15 04:40:41.015 [INFO][4822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.3/32] ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" Jul 15 04:40:41.034819 containerd[1881]: 2025-07-15 04:40:41.015 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf8c4597e86 ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" Jul 15 04:40:41.034819 containerd[1881]: 2025-07-15 04:40:41.017 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" Jul 15 04:40:41.034861 containerd[1881]: 2025-07-15 04:40:41.017 [INFO][4822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0", GenerateName:"calico-apiserver-664ff4b8b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a359b13-17a4-4c68-8017-804e0476456e", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"664ff4b8b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2", Pod:"calico-apiserver-664ff4b8b7-qj8jc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf8c4597e86", MAC:"ae:b5:d0:a6:6f:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:41.034895 containerd[1881]: 2025-07-15 04:40:41.031 [INFO][4822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-qj8jc" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--qj8jc-eth0" Jul 15 04:40:41.116593 systemd-networkd[1618]: calic75895c3e3e: Link UP Jul 15 04:40:41.117060 
systemd-networkd[1618]: calic75895c3e3e: Gained carrier Jul 15 04:40:41.131839 containerd[1881]: 2025-07-15 04:40:40.934 [INFO][4832] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:41.131839 containerd[1881]: 2025-07-15 04:40:40.947 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0 coredns-674b8bbfcf- kube-system 75c359f9-ab12-4f43-8af1-e72044fb1241 813 0 2025-07-15 04:40:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e coredns-674b8bbfcf-jjvzx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic75895c3e3e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-" Jul 15 04:40:41.131839 containerd[1881]: 2025-07-15 04:40:40.947 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" Jul 15 04:40:41.131839 containerd[1881]: 2025-07-15 04:40:40.983 [INFO][4863] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" HandleID="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:40.983 [INFO][4863] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" HandleID="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003312c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4396.0.0-n-16ec4aa50e", "pod":"coredns-674b8bbfcf-jjvzx", "timestamp":"2025-07-15 04:40:40.98382391 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:40.984 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:41.013 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:41.013 [INFO][4863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:41.088 [INFO][4863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:41.092 [INFO][4863] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:41.096 [INFO][4863] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:41.097 [INFO][4863] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132139 containerd[1881]: 2025-07-15 04:40:41.099 [INFO][4863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132276 containerd[1881]: 2025-07-15 04:40:41.099 [INFO][4863] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132276 containerd[1881]: 2025-07-15 04:40:41.100 [INFO][4863] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681 Jul 15 04:40:41.132276 containerd[1881]: 2025-07-15 04:40:41.103 [INFO][4863] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132276 containerd[1881]: 2025-07-15 04:40:41.112 [INFO][4863] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.4/26] block=192.168.87.0/26 handle="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132276 containerd[1881]: 2025-07-15 04:40:41.112 [INFO][4863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.4/26] handle="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.132276 containerd[1881]: 2025-07-15 04:40:41.112 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:40:41.132276 containerd[1881]: 2025-07-15 04:40:41.112 [INFO][4863] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.4/26] IPv6=[] ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" HandleID="k8s-pod-network.da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" Jul 15 04:40:41.132372 containerd[1881]: 2025-07-15 04:40:41.114 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"75c359f9-ab12-4f43-8af1-e72044fb1241", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"coredns-674b8bbfcf-jjvzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic75895c3e3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:41.132372 containerd[1881]: 2025-07-15 04:40:41.114 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.4/32] ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" Jul 15 04:40:41.132372 containerd[1881]: 2025-07-15 04:40:41.114 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic75895c3e3e ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" Jul 15 04:40:41.132372 containerd[1881]: 2025-07-15 04:40:41.117 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" Jul 15 04:40:41.132372 containerd[1881]: 2025-07-15 04:40:41.118 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"75c359f9-ab12-4f43-8af1-e72044fb1241", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681", Pod:"coredns-674b8bbfcf-jjvzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic75895c3e3e", MAC:"ea:d4:ea:41:e4:9c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:41.132372 containerd[1881]: 2025-07-15 04:40:41.129 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" Namespace="kube-system" Pod="coredns-674b8bbfcf-jjvzx" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--jjvzx-eth0" Jul 15 04:40:41.140787 systemd-networkd[1618]: cali25a8b07bd8a: Gained IPv6LL Jul 15 04:40:41.230350 systemd-networkd[1618]: cali6a4b2620ed5: Link UP Jul 15 04:40:41.231557 systemd-networkd[1618]: cali6a4b2620ed5: Gained carrier Jul 15 04:40:41.238986 containerd[1881]: time="2025-07-15T04:40:41.238927335Z" level=info msg="connecting to shim 12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2" address="unix:///run/containerd/s/14951364467c92d1b664a2b1afb3fe6dcfc39d352e4a0f17c7f04c964a76fd29" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:40.955 [INFO][4843] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:40.964 [INFO][4843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0 calico-kube-controllers-79d7f47d74- calico-system beb9ae68-ee75-43e3-9ce6-011d474fdc7f 811 0 2025-07-15 04:40:17 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79d7f47d74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e calico-kube-controllers-79d7f47d74-kwq9v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6a4b2620ed5 [] [] }} ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" Pod="calico-kube-controllers-79d7f47d74-kwq9v" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:40.964 [INFO][4843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" Pod="calico-kube-controllers-79d7f47d74-kwq9v" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:40.997 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" HandleID="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:40.998 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" HandleID="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf000), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-16ec4aa50e", "pod":"calico-kube-controllers-79d7f47d74-kwq9v", "timestamp":"2025-07-15 04:40:40.997870831 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:40.998 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.112 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.112 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.196 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.200 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.203 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.205 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.206 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.206 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 
handle="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.207 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1 Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.212 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.224 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.5/26] block=192.168.87.0/26 handle="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.224 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.5/26] handle="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.224 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:41.254142 containerd[1881]: 2025-07-15 04:40:41.224 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.5/26] IPv6=[] ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" HandleID="k8s-pod-network.dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" Jul 15 04:40:41.256151 containerd[1881]: 2025-07-15 04:40:41.227 [INFO][4843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" Pod="calico-kube-controllers-79d7f47d74-kwq9v" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0", GenerateName:"calico-kube-controllers-79d7f47d74-", Namespace:"calico-system", SelfLink:"", UID:"beb9ae68-ee75-43e3-9ce6-011d474fdc7f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79d7f47d74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"calico-kube-controllers-79d7f47d74-kwq9v", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a4b2620ed5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:41.256151 containerd[1881]: 2025-07-15 04:40:41.227 [INFO][4843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.5/32] ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" Pod="calico-kube-controllers-79d7f47d74-kwq9v" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" Jul 15 04:40:41.256151 containerd[1881]: 2025-07-15 04:40:41.228 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a4b2620ed5 ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" Pod="calico-kube-controllers-79d7f47d74-kwq9v" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" Jul 15 04:40:41.256151 containerd[1881]: 2025-07-15 04:40:41.232 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" Pod="calico-kube-controllers-79d7f47d74-kwq9v" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" Jul 15 04:40:41.256151 containerd[1881]: 2025-07-15 04:40:41.232 [INFO][4843] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" Pod="calico-kube-controllers-79d7f47d74-kwq9v" 
WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0", GenerateName:"calico-kube-controllers-79d7f47d74-", Namespace:"calico-system", SelfLink:"", UID:"beb9ae68-ee75-43e3-9ce6-011d474fdc7f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79d7f47d74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1", Pod:"calico-kube-controllers-79d7f47d74-kwq9v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a4b2620ed5", MAC:"ea:74:25:24:52:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:41.256151 containerd[1881]: 2025-07-15 04:40:41.248 [INFO][4843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" Namespace="calico-system" 
Pod="calico-kube-controllers-79d7f47d74-kwq9v" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--kube--controllers--79d7f47d74--kwq9v-eth0" Jul 15 04:40:41.270220 containerd[1881]: time="2025-07-15T04:40:41.270005717Z" level=info msg="connecting to shim da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681" address="unix:///run/containerd/s/745a43a37dbcc64f9e100f7a0c2cf9f332df6c3df801142e865b27bbcf618d22" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:41.272855 systemd[1]: Started cri-containerd-12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2.scope - libcontainer container 12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2. Jul 15 04:40:41.307006 systemd[1]: Started cri-containerd-da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681.scope - libcontainer container da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681. Jul 15 04:40:41.328826 containerd[1881]: time="2025-07-15T04:40:41.328797674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-qj8jc,Uid:0a359b13-17a4-4c68-8017-804e0476456e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2\"" Jul 15 04:40:41.339075 containerd[1881]: time="2025-07-15T04:40:41.339039614Z" level=info msg="connecting to shim dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1" address="unix:///run/containerd/s/348b2c088c4d8eeb0a24d87bc2b2b40b028560b1b373dc1195a07b12466628a5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:41.367155 containerd[1881]: time="2025-07-15T04:40:41.367032197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jjvzx,Uid:75c359f9-ab12-4f43-8af1-e72044fb1241,Namespace:kube-system,Attempt:0,} returns sandbox id \"da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681\"" Jul 15 04:40:41.379642 containerd[1881]: time="2025-07-15T04:40:41.379609977Z" level=info 
msg="CreateContainer within sandbox \"da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:40:41.380012 systemd[1]: Started cri-containerd-dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1.scope - libcontainer container dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1. Jul 15 04:40:41.427943 containerd[1881]: time="2025-07-15T04:40:41.427855224Z" level=info msg="Container 029551b36ad80579993392473626cb6a924fc67df9b92e7e1f28b41ad89d3a2a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:41.446125 containerd[1881]: time="2025-07-15T04:40:41.446089979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79d7f47d74-kwq9v,Uid:beb9ae68-ee75-43e3-9ce6-011d474fdc7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1\"" Jul 15 04:40:41.455439 containerd[1881]: time="2025-07-15T04:40:41.455356072Z" level=info msg="CreateContainer within sandbox \"da98652134ea07880060c0683e68a9ba88f6583ce13600f59bca560ffe585681\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"029551b36ad80579993392473626cb6a924fc67df9b92e7e1f28b41ad89d3a2a\"" Jul 15 04:40:41.456741 containerd[1881]: time="2025-07-15T04:40:41.456053742Z" level=info msg="StartContainer for \"029551b36ad80579993392473626cb6a924fc67df9b92e7e1f28b41ad89d3a2a\"" Jul 15 04:40:41.457997 containerd[1881]: time="2025-07-15T04:40:41.457966177Z" level=info msg="connecting to shim 029551b36ad80579993392473626cb6a924fc67df9b92e7e1f28b41ad89d3a2a" address="unix:///run/containerd/s/745a43a37dbcc64f9e100f7a0c2cf9f332df6c3df801142e865b27bbcf618d22" protocol=ttrpc version=3 Jul 15 04:40:41.478884 systemd[1]: Started cri-containerd-029551b36ad80579993392473626cb6a924fc67df9b92e7e1f28b41ad89d3a2a.scope - libcontainer container 029551b36ad80579993392473626cb6a924fc67df9b92e7e1f28b41ad89d3a2a. 
Jul 15 04:40:41.520515 containerd[1881]: time="2025-07-15T04:40:41.520477512Z" level=info msg="StartContainer for \"029551b36ad80579993392473626cb6a924fc67df9b92e7e1f28b41ad89d3a2a\" returns successfully" Jul 15 04:40:41.871241 containerd[1881]: time="2025-07-15T04:40:41.871190490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j8nkz,Uid:25ef5024-8c17-4982-9c07-f2b3a463d13a,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:42.040665 kubelet[3314]: I0715 04:40:42.040317 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jjvzx" podStartSLOduration=40.040300852 podStartE2EDuration="40.040300852s" podCreationTimestamp="2025-07-15 04:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:42.026983836 +0000 UTC m=+45.224714574" watchObservedRunningTime="2025-07-15 04:40:42.040300852 +0000 UTC m=+45.238031582" Jul 15 04:40:42.228615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245276496.mount: Deactivated successfully. 
Jul 15 04:40:42.229197 systemd-networkd[1618]: calibf8c4597e86: Gained IPv6LL Jul 15 04:40:42.251799 systemd-networkd[1618]: cali4c243ea490e: Link UP Jul 15 04:40:42.252698 systemd-networkd[1618]: cali4c243ea490e: Gained carrier Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.190 [INFO][5101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.198 [INFO][5101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0 goldmane-768f4c5c69- calico-system 25ef5024-8c17-4982-9c07-f2b3a463d13a 816 0 2025-07-15 04:40:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e goldmane-768f4c5c69-j8nkz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4c243ea490e [] [] }} ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.198 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.214 [INFO][5113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" HandleID="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" 
Workload="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.214 [INFO][5113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" HandleID="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-16ec4aa50e", "pod":"goldmane-768f4c5c69-j8nkz", "timestamp":"2025-07-15 04:40:42.214782073 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.214 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.215 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.215 [INFO][5113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.221 [INFO][5113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.226 [INFO][5113] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.232 [INFO][5113] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.233 [INFO][5113] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.234 [INFO][5113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.235 [INFO][5113] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.236 [INFO][5113] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325 Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.240 [INFO][5113] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.247 [INFO][5113] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.6/26] block=192.168.87.0/26 handle="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.248 [INFO][5113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.6/26] handle="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.248 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:40:42.268564 containerd[1881]: 2025-07-15 04:40:42.248 [INFO][5113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.6/26] IPv6=[] ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" HandleID="k8s-pod-network.c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" Jul 15 04:40:42.269296 containerd[1881]: 2025-07-15 04:40:42.249 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"25ef5024-8c17-4982-9c07-f2b3a463d13a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"goldmane-768f4c5c69-j8nkz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c243ea490e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:42.269296 containerd[1881]: 2025-07-15 04:40:42.249 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.6/32] ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" Jul 15 04:40:42.269296 containerd[1881]: 2025-07-15 04:40:42.249 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c243ea490e ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" Jul 15 04:40:42.269296 containerd[1881]: 2025-07-15 04:40:42.253 [INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" Jul 15 04:40:42.269296 containerd[1881]: 2025-07-15 04:40:42.253 [INFO][5101] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"25ef5024-8c17-4982-9c07-f2b3a463d13a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325", Pod:"goldmane-768f4c5c69-j8nkz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c243ea490e", MAC:"82:50:fe:8b:83:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:42.269296 containerd[1881]: 2025-07-15 04:40:42.265 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" Namespace="calico-system" Pod="goldmane-768f4c5c69-j8nkz" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-goldmane--768f4c5c69--j8nkz-eth0" Jul 15 04:40:42.356836 systemd-networkd[1618]: calic75895c3e3e: Gained IPv6LL Jul 15 04:40:42.717442 containerd[1881]: time="2025-07-15T04:40:42.717326735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:42.728672 containerd[1881]: time="2025-07-15T04:40:42.728632391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 04:40:42.736923 containerd[1881]: time="2025-07-15T04:40:42.736751073Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:42.743303 containerd[1881]: time="2025-07-15T04:40:42.743258752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 3.597200259s" Jul 15 04:40:42.743303 containerd[1881]: time="2025-07-15T04:40:42.743304033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 04:40:42.744059 containerd[1881]: time="2025-07-15T04:40:42.744016992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 15 04:40:42.744655 containerd[1881]: time="2025-07-15T04:40:42.744559425Z" level=info msg="connecting to shim c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325" address="unix:///run/containerd/s/cc55e75bf1e4008df0b22b710364c6d757928c901f4cb725efc67d7b43feff3e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:42.749723 containerd[1881]: time="2025-07-15T04:40:42.749671748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 04:40:42.761183 containerd[1881]: time="2025-07-15T04:40:42.761140464Z" level=info msg="CreateContainer within sandbox \"71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 04:40:42.767839 systemd[1]: Started cri-containerd-c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325.scope - libcontainer container c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325. Jul 15 04:40:42.804931 systemd-networkd[1618]: cali6a4b2620ed5: Gained IPv6LL Jul 15 04:40:42.806726 containerd[1881]: time="2025-07-15T04:40:42.806623031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-j8nkz,Uid:25ef5024-8c17-4982-9c07-f2b3a463d13a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325\"" Jul 15 04:40:42.826431 containerd[1881]: time="2025-07-15T04:40:42.826316017Z" level=info msg="Container f249ac02c9b5c6a4181974916aec13d85cb04351310636a19a1ef39d708937bc: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:42.830951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236866000.mount: Deactivated successfully. 
Jul 15 04:40:42.851444 containerd[1881]: time="2025-07-15T04:40:42.851383814Z" level=info msg="CreateContainer within sandbox \"71b1ba84f13ca9334bed52fd338017b5c85e806128ad4669c5ce6bbf0db2d570\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f249ac02c9b5c6a4181974916aec13d85cb04351310636a19a1ef39d708937bc\"" Jul 15 04:40:42.852362 containerd[1881]: time="2025-07-15T04:40:42.852335877Z" level=info msg="StartContainer for \"f249ac02c9b5c6a4181974916aec13d85cb04351310636a19a1ef39d708937bc\"" Jul 15 04:40:42.854462 containerd[1881]: time="2025-07-15T04:40:42.854405142Z" level=info msg="connecting to shim f249ac02c9b5c6a4181974916aec13d85cb04351310636a19a1ef39d708937bc" address="unix:///run/containerd/s/290b1bf2eab0593276b8624adee17fcb9fbfab9a0b877f6c0ad6e47a05c02657" protocol=ttrpc version=3 Jul 15 04:40:42.873993 containerd[1881]: time="2025-07-15T04:40:42.873894626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-wb4g9,Uid:27194450-2b09-43c8-89e6-f093b196899d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:42.889896 systemd[1]: Started cri-containerd-f249ac02c9b5c6a4181974916aec13d85cb04351310636a19a1ef39d708937bc.scope - libcontainer container f249ac02c9b5c6a4181974916aec13d85cb04351310636a19a1ef39d708937bc. 
Jul 15 04:40:42.953515 containerd[1881]: time="2025-07-15T04:40:42.953459301Z" level=info msg="StartContainer for \"f249ac02c9b5c6a4181974916aec13d85cb04351310636a19a1ef39d708937bc\" returns successfully" Jul 15 04:40:42.993443 systemd-networkd[1618]: cali16418757172: Link UP Jul 15 04:40:42.994038 systemd-networkd[1618]: cali16418757172: Gained carrier Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.921 [INFO][5209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.933 [INFO][5209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0 calico-apiserver-664ff4b8b7- calico-apiserver 27194450-2b09-43c8-89e6-f093b196899d 814 0 2025-07-15 04:40:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:664ff4b8b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e calico-apiserver-664ff4b8b7-wb4g9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali16418757172 [] [] }} ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.933 [INFO][5209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.960 
[INFO][5234] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" HandleID="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.961 [INFO][5234] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" HandleID="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002baff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-16ec4aa50e", "pod":"calico-apiserver-664ff4b8b7-wb4g9", "timestamp":"2025-07-15 04:40:42.960904305 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.961 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.961 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.961 [INFO][5234] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.965 [INFO][5234] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.969 [INFO][5234] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.972 [INFO][5234] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.973 [INFO][5234] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.975 [INFO][5234] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.975 [INFO][5234] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.976 [INFO][5234] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191 Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.983 [INFO][5234] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.988 [INFO][5234] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.7/26] block=192.168.87.0/26 handle="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.988 [INFO][5234] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.7/26] handle="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.988 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:40:43.008183 containerd[1881]: 2025-07-15 04:40:42.988 [INFO][5234] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.7/26] IPv6=[] ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" HandleID="k8s-pod-network.1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" Jul 15 04:40:43.008610 containerd[1881]: 2025-07-15 04:40:42.990 [INFO][5209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0", GenerateName:"calico-apiserver-664ff4b8b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"27194450-2b09-43c8-89e6-f093b196899d", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"664ff4b8b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"calico-apiserver-664ff4b8b7-wb4g9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16418757172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:43.008610 containerd[1881]: 2025-07-15 04:40:42.990 [INFO][5209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.7/32] ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" Jul 15 04:40:43.008610 containerd[1881]: 2025-07-15 04:40:42.990 [INFO][5209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16418757172 ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" Jul 15 04:40:43.008610 containerd[1881]: 2025-07-15 04:40:42.995 [INFO][5209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" 
WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" Jul 15 04:40:43.008610 containerd[1881]: 2025-07-15 04:40:42.996 [INFO][5209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0", GenerateName:"calico-apiserver-664ff4b8b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"27194450-2b09-43c8-89e6-f093b196899d", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"664ff4b8b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191", Pod:"calico-apiserver-664ff4b8b7-wb4g9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16418757172", MAC:"fe:d4:c6:a4:c5:a5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:43.008610 containerd[1881]: 2025-07-15 04:40:43.005 [INFO][5209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" Namespace="calico-apiserver" Pod="calico-apiserver-664ff4b8b7-wb4g9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-calico--apiserver--664ff4b8b7--wb4g9-eth0" Jul 15 04:40:43.030931 kubelet[3314]: I0715 04:40:43.030835 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c4b8cc5c-rnbht" podStartSLOduration=0.896718439 podStartE2EDuration="6.030820105s" podCreationTimestamp="2025-07-15 04:40:37 +0000 UTC" firstStartedPulling="2025-07-15 04:40:37.613049474 +0000 UTC m=+40.810780204" lastFinishedPulling="2025-07-15 04:40:42.74715114 +0000 UTC m=+45.944881870" observedRunningTime="2025-07-15 04:40:43.029885755 +0000 UTC m=+46.227616501" watchObservedRunningTime="2025-07-15 04:40:43.030820105 +0000 UTC m=+46.228550835" Jul 15 04:40:43.073169 containerd[1881]: time="2025-07-15T04:40:43.073128666Z" level=info msg="connecting to shim 1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191" address="unix:///run/containerd/s/247ca9e4ce936dbf0a6746842db6cd54aa3ed04fc5dafb44fc09a160ef08c3de" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:43.093840 systemd[1]: Started cri-containerd-1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191.scope - libcontainer container 1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191. 
Jul 15 04:40:43.123405 containerd[1881]: time="2025-07-15T04:40:43.123365296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-664ff4b8b7-wb4g9,Uid:27194450-2b09-43c8-89e6-f093b196899d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191\"" Jul 15 04:40:43.870954 containerd[1881]: time="2025-07-15T04:40:43.870913660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pnv9,Uid:620101e4-1e3b-45da-9e88-0454710f6851,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:44.000668 systemd-networkd[1618]: calic951bcca5b4: Link UP Jul 15 04:40:44.000864 systemd-networkd[1618]: calic951bcca5b4: Gained carrier Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.896 [INFO][5306] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.905 [INFO][5306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0 coredns-674b8bbfcf- kube-system 620101e4-1e3b-45da-9e88-0454710f6851 808 0 2025-07-15 04:40:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4396.0.0-n-16ec4aa50e coredns-674b8bbfcf-8pnv9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic951bcca5b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.905 [INFO][5306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.944 [INFO][5317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" HandleID="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.946 [INFO][5317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" HandleID="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4396.0.0-n-16ec4aa50e", "pod":"coredns-674b8bbfcf-8pnv9", "timestamp":"2025-07-15 04:40:43.944260978 +0000 UTC"}, Hostname:"ci-4396.0.0-n-16ec4aa50e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.948 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.948 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.948 [INFO][5317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-16ec4aa50e' Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.955 [INFO][5317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.959 [INFO][5317] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.963 [INFO][5317] ipam/ipam.go 511: Trying affinity for 192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.965 [INFO][5317] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.968 [INFO][5317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.0/26 host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.970 [INFO][5317] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.0/26 handle="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.973 [INFO][5317] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4 Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.983 [INFO][5317] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.0/26 handle="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.993 [INFO][5317] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.87.8/26] block=192.168.87.0/26 handle="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.993 [INFO][5317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.8/26] handle="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" host="ci-4396.0.0-n-16ec4aa50e" Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.993 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:40:44.015243 containerd[1881]: 2025-07-15 04:40:43.993 [INFO][5317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.8/26] IPv6=[] ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" HandleID="k8s-pod-network.ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Workload="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" Jul 15 04:40:44.015646 containerd[1881]: 2025-07-15 04:40:43.996 [INFO][5306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"620101e4-1e3b-45da-9e88-0454710f6851", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"", Pod:"coredns-674b8bbfcf-8pnv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic951bcca5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:44.015646 containerd[1881]: 2025-07-15 04:40:43.996 [INFO][5306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.8/32] ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" Jul 15 04:40:44.015646 containerd[1881]: 2025-07-15 04:40:43.996 [INFO][5306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic951bcca5b4 ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" Jul 15 04:40:44.015646 containerd[1881]: 2025-07-15 04:40:43.998 [INFO][5306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" Jul 15 04:40:44.015646 containerd[1881]: 2025-07-15 04:40:43.998 [INFO][5306] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"620101e4-1e3b-45da-9e88-0454710f6851", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-16ec4aa50e", ContainerID:"ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4", Pod:"coredns-674b8bbfcf-8pnv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic951bcca5b4", MAC:"b6:e8:1b:01:98:30", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:44.015646 containerd[1881]: 2025-07-15 04:40:44.011 [INFO][5306] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8pnv9" WorkloadEndpoint="ci--4396.0.0--n--16ec4aa50e-k8s-coredns--674b8bbfcf--8pnv9-eth0" Jul 15 04:40:44.091887 containerd[1881]: time="2025-07-15T04:40:44.091769317Z" level=info msg="connecting to shim ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4" address="unix:///run/containerd/s/449fd1ca1ace376e2d90e1d19239190335c6b1925604a7c0ae4e8d2b82aed2b5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:44.116862 systemd[1]: Started cri-containerd-ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4.scope - libcontainer container ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4. 
Jul 15 04:40:44.154885 containerd[1881]: time="2025-07-15T04:40:44.154848291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pnv9,Uid:620101e4-1e3b-45da-9e88-0454710f6851,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4\""
Jul 15 04:40:44.166274 containerd[1881]: time="2025-07-15T04:40:44.166244357Z" level=info msg="CreateContainer within sandbox \"ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 04:40:44.210363 containerd[1881]: time="2025-07-15T04:40:44.209925947Z" level=info msg="Container 433927191fda36fc67bd9b9f4a3e4769983328467e1aadf9e556acd8da48d806: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:44.212135 containerd[1881]: time="2025-07-15T04:40:44.212112928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:44.212896 systemd-networkd[1618]: cali4c243ea490e: Gained IPv6LL
Jul 15 04:40:44.215806 containerd[1881]: time="2025-07-15T04:40:44.215785501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 15 04:40:44.227914 containerd[1881]: time="2025-07-15T04:40:44.227886406Z" level=info msg="CreateContainer within sandbox \"ecee0533e4a229bd76788b28931f5035edb7ece2907af41ac4d3b455e1bebac4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"433927191fda36fc67bd9b9f4a3e4769983328467e1aadf9e556acd8da48d806\""
Jul 15 04:40:44.228256 containerd[1881]: time="2025-07-15T04:40:44.228040707Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:44.230336 containerd[1881]: time="2025-07-15T04:40:44.230314635Z" level=info msg="StartContainer for \"433927191fda36fc67bd9b9f4a3e4769983328467e1aadf9e556acd8da48d806\""
Jul 15 04:40:44.231395 containerd[1881]: time="2025-07-15T04:40:44.231313363Z" level=info msg="connecting to shim 433927191fda36fc67bd9b9f4a3e4769983328467e1aadf9e556acd8da48d806" address="unix:///run/containerd/s/449fd1ca1ace376e2d90e1d19239190335c6b1925604a7c0ae4e8d2b82aed2b5" protocol=ttrpc version=3
Jul 15 04:40:44.235544 containerd[1881]: time="2025-07-15T04:40:44.234649461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:44.235730 containerd[1881]: time="2025-07-15T04:40:44.235014265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.485096021s"
Jul 15 04:40:44.235850 containerd[1881]: time="2025-07-15T04:40:44.235786369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Jul 15 04:40:44.238528 containerd[1881]: time="2025-07-15T04:40:44.238377196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 15 04:40:44.247533 containerd[1881]: time="2025-07-15T04:40:44.246842721Z" level=info msg="CreateContainer within sandbox \"329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 15 04:40:44.250967 systemd[1]: Started cri-containerd-433927191fda36fc67bd9b9f4a3e4769983328467e1aadf9e556acd8da48d806.scope - libcontainer container 433927191fda36fc67bd9b9f4a3e4769983328467e1aadf9e556acd8da48d806.
Jul 15 04:40:44.277027 containerd[1881]: time="2025-07-15T04:40:44.276996064Z" level=info msg="StartContainer for \"433927191fda36fc67bd9b9f4a3e4769983328467e1aadf9e556acd8da48d806\" returns successfully"
Jul 15 04:40:44.281738 containerd[1881]: time="2025-07-15T04:40:44.280601698Z" level=info msg="Container e27f09d0dfa2b970de78b8b7fddd64b173a6f05ec4f5f0668e2d59a24659b93c: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:44.301207 containerd[1881]: time="2025-07-15T04:40:44.301155208Z" level=info msg="CreateContainer within sandbox \"329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e27f09d0dfa2b970de78b8b7fddd64b173a6f05ec4f5f0668e2d59a24659b93c\""
Jul 15 04:40:44.301817 containerd[1881]: time="2025-07-15T04:40:44.301799997Z" level=info msg="StartContainer for \"e27f09d0dfa2b970de78b8b7fddd64b173a6f05ec4f5f0668e2d59a24659b93c\""
Jul 15 04:40:44.302990 containerd[1881]: time="2025-07-15T04:40:44.302942617Z" level=info msg="connecting to shim e27f09d0dfa2b970de78b8b7fddd64b173a6f05ec4f5f0668e2d59a24659b93c" address="unix:///run/containerd/s/04b5ebac4d31e07dea94508dcd98ced6fac86ac4435baf9b4cf9591eaf706a6a" protocol=ttrpc version=3
Jul 15 04:40:44.321856 systemd[1]: Started cri-containerd-e27f09d0dfa2b970de78b8b7fddd64b173a6f05ec4f5f0668e2d59a24659b93c.scope - libcontainer container e27f09d0dfa2b970de78b8b7fddd64b173a6f05ec4f5f0668e2d59a24659b93c.
Jul 15 04:40:44.362737 containerd[1881]: time="2025-07-15T04:40:44.362672348Z" level=info msg="StartContainer for \"e27f09d0dfa2b970de78b8b7fddd64b173a6f05ec4f5f0668e2d59a24659b93c\" returns successfully"
Jul 15 04:40:44.532855 systemd-networkd[1618]: cali16418757172: Gained IPv6LL
Jul 15 04:40:45.055156 kubelet[3314]: I0715 04:40:45.054795 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8pnv9" podStartSLOduration=43.054774871 podStartE2EDuration="43.054774871s" podCreationTimestamp="2025-07-15 04:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:45.053618378 +0000 UTC m=+48.251349108" watchObservedRunningTime="2025-07-15 04:40:45.054774871 +0000 UTC m=+48.252505601"
Jul 15 04:40:45.172854 systemd-networkd[1618]: calic951bcca5b4: Gained IPv6LL
Jul 15 04:40:45.497124 kubelet[3314]: I0715 04:40:45.496982 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:40:46.145005 containerd[1881]: time="2025-07-15T04:40:46.144952709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:46.147422 containerd[1881]: time="2025-07-15T04:40:46.147390483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149"
Jul 15 04:40:46.154719 containerd[1881]: time="2025-07-15T04:40:46.154685515Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:46.179584 containerd[1881]: time="2025-07-15T04:40:46.179522993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:46.180385 containerd[1881]: time="2025-07-15T04:40:46.180269560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.941864372s"
Jul 15 04:40:46.180385 containerd[1881]: time="2025-07-15T04:40:46.180299145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Jul 15 04:40:46.185021 containerd[1881]: time="2025-07-15T04:40:46.184886787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 15 04:40:46.201412 containerd[1881]: time="2025-07-15T04:40:46.201376128Z" level=info msg="CreateContainer within sandbox \"12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 15 04:40:46.313534 containerd[1881]: time="2025-07-15T04:40:46.312929027Z" level=info msg="Container 498f4a5a917edd782e1c35ee97395c9025339719d977dcf64b75fa99b71569bc: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:46.852691 containerd[1881]: time="2025-07-15T04:40:46.852647328Z" level=info msg="CreateContainer within sandbox \"12dd9623be3266c91f89fb6fd67a3e538b699ac66d7333e3fc20cf9a5686d5c2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"498f4a5a917edd782e1c35ee97395c9025339719d977dcf64b75fa99b71569bc\""
Jul 15 04:40:46.853455 containerd[1881]: time="2025-07-15T04:40:46.853378207Z" level=info msg="StartContainer for \"498f4a5a917edd782e1c35ee97395c9025339719d977dcf64b75fa99b71569bc\""
Jul 15 04:40:46.855656 containerd[1881]: time="2025-07-15T04:40:46.855438025Z" level=info msg="connecting to shim 498f4a5a917edd782e1c35ee97395c9025339719d977dcf64b75fa99b71569bc" address="unix:///run/containerd/s/14951364467c92d1b664a2b1afb3fe6dcfc39d352e4a0f17c7f04c964a76fd29" protocol=ttrpc version=3
Jul 15 04:40:46.883887 systemd[1]: Started cri-containerd-498f4a5a917edd782e1c35ee97395c9025339719d977dcf64b75fa99b71569bc.scope - libcontainer container 498f4a5a917edd782e1c35ee97395c9025339719d977dcf64b75fa99b71569bc.
Jul 15 04:40:46.953574 systemd-networkd[1618]: vxlan.calico: Link UP
Jul 15 04:40:46.953582 systemd-networkd[1618]: vxlan.calico: Gained carrier
Jul 15 04:40:46.964230 containerd[1881]: time="2025-07-15T04:40:46.964055583Z" level=info msg="StartContainer for \"498f4a5a917edd782e1c35ee97395c9025339719d977dcf64b75fa99b71569bc\" returns successfully"
Jul 15 04:40:48.032299 kubelet[3314]: I0715 04:40:48.031879 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-664ff4b8b7-qj8jc" podStartSLOduration=30.178483481 podStartE2EDuration="35.031863494s" podCreationTimestamp="2025-07-15 04:40:13 +0000 UTC" firstStartedPulling="2025-07-15 04:40:41.330803808 +0000 UTC m=+44.528534538" lastFinishedPulling="2025-07-15 04:40:46.184183821 +0000 UTC m=+49.381914551" observedRunningTime="2025-07-15 04:40:47.069752664 +0000 UTC m=+50.267483466" watchObservedRunningTime="2025-07-15 04:40:48.031863494 +0000 UTC m=+51.229594224"
Jul 15 04:40:48.308932 systemd-networkd[1618]: vxlan.calico: Gained IPv6LL
Jul 15 04:40:49.268561 containerd[1881]: time="2025-07-15T04:40:49.268068488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:49.271022 containerd[1881]: time="2025-07-15T04:40:49.270997485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336"
Jul 15 04:40:49.275227 containerd[1881]: time="2025-07-15T04:40:49.275203499Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:49.281449 containerd[1881]: time="2025-07-15T04:40:49.281420816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:49.281703 containerd[1881]: time="2025-07-15T04:40:49.281673832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.096762572s"
Jul 15 04:40:49.281765 containerd[1881]: time="2025-07-15T04:40:49.281703681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""
Jul 15 04:40:49.282797 containerd[1881]: time="2025-07-15T04:40:49.282757987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 15 04:40:49.302609 containerd[1881]: time="2025-07-15T04:40:49.302528392Z" level=info msg="CreateContainer within sandbox \"dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 15 04:40:49.333658 containerd[1881]: time="2025-07-15T04:40:49.333547882Z" level=info msg="Container 716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:49.350477 containerd[1881]: time="2025-07-15T04:40:49.350436667Z" level=info msg="CreateContainer within sandbox \"dd746d05f7b3205cbd3be2904cede729bffe9d1456c97334d94769c9dd59a7c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\""
Jul 15 04:40:49.351048 containerd[1881]: time="2025-07-15T04:40:49.351021990Z" level=info msg="StartContainer for \"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\""
Jul 15 04:40:49.352155 containerd[1881]: time="2025-07-15T04:40:49.352100024Z" level=info msg="connecting to shim 716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1" address="unix:///run/containerd/s/348b2c088c4d8eeb0a24d87bc2b2b40b028560b1b373dc1195a07b12466628a5" protocol=ttrpc version=3
Jul 15 04:40:49.397833 systemd[1]: Started cri-containerd-716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1.scope - libcontainer container 716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1.
Jul 15 04:40:49.436601 containerd[1881]: time="2025-07-15T04:40:49.436384369Z" level=info msg="StartContainer for \"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\" returns successfully"
Jul 15 04:40:50.104556 containerd[1881]: time="2025-07-15T04:40:50.103137653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\" id:\"9f878183d450475ed86d1a8788965267485ad0e0e4e5d4dd04aaf6bf465e1608\" pid:5746 exited_at:{seconds:1752554450 nanos:94225451}"
Jul 15 04:40:50.119311 kubelet[3314]: I0715 04:40:50.119061 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79d7f47d74-kwq9v" podStartSLOduration=25.284170076 podStartE2EDuration="33.119045845s" podCreationTimestamp="2025-07-15 04:40:17 +0000 UTC" firstStartedPulling="2025-07-15 04:40:41.447536703 +0000 UTC m=+44.645267433" lastFinishedPulling="2025-07-15 04:40:49.282412472 +0000 UTC m=+52.480143202" observedRunningTime="2025-07-15 04:40:50.073756483 +0000 UTC m=+53.271487261" watchObservedRunningTime="2025-07-15 04:40:50.119045845 +0000 UTC m=+53.316776575"
Jul 15 04:40:52.409011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148804088.mount: Deactivated successfully.
Jul 15 04:40:52.983106 containerd[1881]: time="2025-07-15T04:40:52.982552671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:52.985454 containerd[1881]: time="2025-07-15T04:40:52.985427794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 15 04:40:52.992507 containerd[1881]: time="2025-07-15T04:40:52.992479425Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:52.998279 containerd[1881]: time="2025-07-15T04:40:52.998226887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:52.998824 containerd[1881]: time="2025-07-15T04:40:52.998783961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.715997628s"
Jul 15 04:40:52.998973 containerd[1881]: time="2025-07-15T04:40:52.998811417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 15 04:40:53.001234 containerd[1881]: time="2025-07-15T04:40:53.001129291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 15 04:40:53.011910 containerd[1881]: time="2025-07-15T04:40:53.011829701Z" level=info msg="CreateContainer within sandbox \"c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 15 04:40:53.043407 containerd[1881]: time="2025-07-15T04:40:53.042855683Z" level=info msg="Container 9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:53.065427 containerd[1881]: time="2025-07-15T04:40:53.065371596Z" level=info msg="CreateContainer within sandbox \"c2688c093bdeca52994bdef78aa98c22d0de5669901491168c85355ad585a325\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\""
Jul 15 04:40:53.067087 containerd[1881]: time="2025-07-15T04:40:53.067060521Z" level=info msg="StartContainer for \"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\""
Jul 15 04:40:53.069034 containerd[1881]: time="2025-07-15T04:40:53.069007959Z" level=info msg="connecting to shim 9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d" address="unix:///run/containerd/s/cc55e75bf1e4008df0b22b710364c6d757928c901f4cb725efc67d7b43feff3e" protocol=ttrpc version=3
Jul 15 04:40:53.088836 systemd[1]: Started cri-containerd-9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d.scope - libcontainer container 9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d.
Jul 15 04:40:53.133037 containerd[1881]: time="2025-07-15T04:40:53.132998144Z" level=info msg="StartContainer for \"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" returns successfully"
Jul 15 04:40:53.332769 containerd[1881]: time="2025-07-15T04:40:53.332249809Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:53.336445 containerd[1881]: time="2025-07-15T04:40:53.336419829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 15 04:40:53.337683 containerd[1881]: time="2025-07-15T04:40:53.337658764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 336.500009ms"
Jul 15 04:40:53.337803 containerd[1881]: time="2025-07-15T04:40:53.337788681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\""
Jul 15 04:40:53.338910 containerd[1881]: time="2025-07-15T04:40:53.338857354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 15 04:40:53.346191 containerd[1881]: time="2025-07-15T04:40:53.346153617Z" level=info msg="CreateContainer within sandbox \"1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 15 04:40:53.391546 containerd[1881]: time="2025-07-15T04:40:53.389682971Z" level=info msg="Container 3c543bfa1f628da0f744c852318ed039d58a83192b91a62fbd0ce36713bd9d05: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:53.415055 containerd[1881]: time="2025-07-15T04:40:53.415014332Z" level=info msg="CreateContainer within sandbox \"1ccf258ba3d573296324c22113a71d2c831ae6b134792bba41a3c3804ff37191\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3c543bfa1f628da0f744c852318ed039d58a83192b91a62fbd0ce36713bd9d05\""
Jul 15 04:40:53.415828 containerd[1881]: time="2025-07-15T04:40:53.415698058Z" level=info msg="StartContainer for \"3c543bfa1f628da0f744c852318ed039d58a83192b91a62fbd0ce36713bd9d05\""
Jul 15 04:40:53.416731 containerd[1881]: time="2025-07-15T04:40:53.416608623Z" level=info msg="connecting to shim 3c543bfa1f628da0f744c852318ed039d58a83192b91a62fbd0ce36713bd9d05" address="unix:///run/containerd/s/247ca9e4ce936dbf0a6746842db6cd54aa3ed04fc5dafb44fc09a160ef08c3de" protocol=ttrpc version=3
Jul 15 04:40:53.433832 systemd[1]: Started cri-containerd-3c543bfa1f628da0f744c852318ed039d58a83192b91a62fbd0ce36713bd9d05.scope - libcontainer container 3c543bfa1f628da0f744c852318ed039d58a83192b91a62fbd0ce36713bd9d05.
Jul 15 04:40:53.468050 containerd[1881]: time="2025-07-15T04:40:53.468012905Z" level=info msg="StartContainer for \"3c543bfa1f628da0f744c852318ed039d58a83192b91a62fbd0ce36713bd9d05\" returns successfully"
Jul 15 04:40:54.105509 kubelet[3314]: I0715 04:40:54.105452 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-664ff4b8b7-wb4g9" podStartSLOduration=30.891597015 podStartE2EDuration="41.105437845s" podCreationTimestamp="2025-07-15 04:40:13 +0000 UTC" firstStartedPulling="2025-07-15 04:40:43.124627536 +0000 UTC m=+46.322358266" lastFinishedPulling="2025-07-15 04:40:53.338468366 +0000 UTC m=+56.536199096" observedRunningTime="2025-07-15 04:40:54.086153107 +0000 UTC m=+57.283883869" watchObservedRunningTime="2025-07-15 04:40:54.105437845 +0000 UTC m=+57.303168575"
Jul 15 04:40:54.107765 kubelet[3314]: I0715 04:40:54.107726 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-j8nkz" podStartSLOduration=26.917852716 podStartE2EDuration="37.107714213s" podCreationTimestamp="2025-07-15 04:40:17 +0000 UTC" firstStartedPulling="2025-07-15 04:40:42.809751978 +0000 UTC m=+46.007482708" lastFinishedPulling="2025-07-15 04:40:52.999613475 +0000 UTC m=+56.197344205" observedRunningTime="2025-07-15 04:40:54.106652499 +0000 UTC m=+57.304383301" watchObservedRunningTime="2025-07-15 04:40:54.107714213 +0000 UTC m=+57.305444943"
Jul 15 04:40:54.165458 containerd[1881]: time="2025-07-15T04:40:54.165011018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"4d765c0d6379d563c3135784f9bd01b1c402f31c58728d71c7a7008c77afa497\" pid:5853 exit_status:1 exited_at:{seconds:1752554454 nanos:164340733}"
Jul 15 04:40:55.073990 kubelet[3314]: I0715 04:40:55.073889 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:40:55.150920 containerd[1881]: time="2025-07-15T04:40:55.150859136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"9bce858ea386a8ca4c82ed66c1321c17b1f11296a8bec244646b68dcb4bf86de\" pid:5883 exit_status:1 exited_at:{seconds:1752554455 nanos:150557814}"
Jul 15 04:40:55.251753 containerd[1881]: time="2025-07-15T04:40:55.251221056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:55.254436 containerd[1881]: time="2025-07-15T04:40:55.254392300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 15 04:40:55.262702 containerd[1881]: time="2025-07-15T04:40:55.262637801Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:55.268590 containerd[1881]: time="2025-07-15T04:40:55.268534204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:55.269110 containerd[1881]: time="2025-07-15T04:40:55.268920584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.930038085s"
Jul 15 04:40:55.269110 containerd[1881]: time="2025-07-15T04:40:55.268952537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 15 04:40:55.276836 containerd[1881]: time="2025-07-15T04:40:55.276801521Z" level=info msg="CreateContainer within sandbox \"329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 15 04:40:55.310740 containerd[1881]: time="2025-07-15T04:40:55.308968883Z" level=info msg="Container a0bb3564351d02a52fa93fb7f75c4274bba3965f4e1235fb0f81cc28b46d95ae: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:55.315486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096064308.mount: Deactivated successfully.
Jul 15 04:40:55.331645 containerd[1881]: time="2025-07-15T04:40:55.331456995Z" level=info msg="CreateContainer within sandbox \"329161e1a4116863c36ebe1c1bdf645edeb30bc1844912c4cabc12b306a558c8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a0bb3564351d02a52fa93fb7f75c4274bba3965f4e1235fb0f81cc28b46d95ae\""
Jul 15 04:40:55.332694 containerd[1881]: time="2025-07-15T04:40:55.332409689Z" level=info msg="StartContainer for \"a0bb3564351d02a52fa93fb7f75c4274bba3965f4e1235fb0f81cc28b46d95ae\""
Jul 15 04:40:55.334929 containerd[1881]: time="2025-07-15T04:40:55.334894448Z" level=info msg="connecting to shim a0bb3564351d02a52fa93fb7f75c4274bba3965f4e1235fb0f81cc28b46d95ae" address="unix:///run/containerd/s/04b5ebac4d31e07dea94508dcd98ced6fac86ac4435baf9b4cf9591eaf706a6a" protocol=ttrpc version=3
Jul 15 04:40:55.362912 systemd[1]: Started cri-containerd-a0bb3564351d02a52fa93fb7f75c4274bba3965f4e1235fb0f81cc28b46d95ae.scope - libcontainer container a0bb3564351d02a52fa93fb7f75c4274bba3965f4e1235fb0f81cc28b46d95ae.
Jul 15 04:40:55.394649 containerd[1881]: time="2025-07-15T04:40:55.394609337Z" level=info msg="StartContainer for \"a0bb3564351d02a52fa93fb7f75c4274bba3965f4e1235fb0f81cc28b46d95ae\" returns successfully"
Jul 15 04:40:55.983358 kubelet[3314]: I0715 04:40:55.983288 3314 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 15 04:40:55.985965 kubelet[3314]: I0715 04:40:55.985947 3314 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 15 04:40:56.141447 containerd[1881]: time="2025-07-15T04:40:56.141392657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"d47ed92f695d09f8affd6098d50450f0d0f04e522d8cee2ca277018c89f4664f\" pid:5939 exit_status:1 exited_at:{seconds:1752554456 nanos:140932539}"
Jul 15 04:41:07.059252 containerd[1881]: time="2025-07-15T04:41:07.059196628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\" id:\"32f880228deff8820a1b4a0b4c2f57b1f7e4b1fb1a323061d01e720cd571c659\" pid:5978 exited_at:{seconds:1752554467 nanos:58833064}"
Jul 15 04:41:07.076804 kubelet[3314]: I0715 04:41:07.076735 3314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qm58g" podStartSLOduration=34.903503678999996 podStartE2EDuration="50.076698009s" podCreationTimestamp="2025-07-15 04:40:17 +0000 UTC" firstStartedPulling="2025-07-15 04:40:40.096649139 +0000 UTC m=+43.294379869" lastFinishedPulling="2025-07-15 04:40:55.269843477 +0000 UTC m=+58.467574199" observedRunningTime="2025-07-15 04:40:56.101130895 +0000 UTC m=+59.298861625" watchObservedRunningTime="2025-07-15 04:41:07.076698009 +0000 UTC m=+70.274428739"
Jul 15 04:41:08.832803 kubelet[3314]: I0715 04:41:08.832691 3314 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 04:41:20.096444 containerd[1881]: time="2025-07-15T04:41:20.096235860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\" id:\"b138a9b2e7e0b36af6cc8420a9ed4f0345ac325b88cc00836845fe6d5d5541eb\" pid:6014 exited_at:{seconds:1752554480 nanos:96066758}"
Jul 15 04:41:26.141629 containerd[1881]: time="2025-07-15T04:41:26.141569259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"85490b18e1c797bf2d5cc7765b8cfb9cb1591bdf4f5b2d75a734c5359b7c95ba\" pid:6034 exited_at:{seconds:1752554486 nanos:141256080}"
Jul 15 04:41:33.586371 containerd[1881]: time="2025-07-15T04:41:33.586328358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\" id:\"deb3ec988e1a560397f881f6dc40bc21dfe2168a323175355a8583a425218363\" pid:6069 exited_at:{seconds:1752554493 nanos:585882352}"
Jul 15 04:41:36.398639 containerd[1881]: time="2025-07-15T04:41:36.398583239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"6742082c90f213126eeed11dc9b517744aabcc07b365ef8aac15ff3bc05f8269\" pid:6090 exited_at:{seconds:1752554496 nanos:398309582}"
Jul 15 04:41:37.063622 containerd[1881]: time="2025-07-15T04:41:37.063192465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\" id:\"0d102a95314190d8580b0343d44098deb5adb2bc4634f45ca7d577cc29635898\" pid:6112 exited_at:{seconds:1752554497 nanos:62894191}"
Jul 15 04:41:44.184034 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:38540.service - OpenSSH per-connection server daemon (10.200.16.10:38540).
Jul 15 04:41:44.681961 sshd[6128]: Accepted publickey for core from 10.200.16.10 port 38540 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:41:44.683393 sshd-session[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:41:44.687033 systemd-logind[1861]: New session 10 of user core.
Jul 15 04:41:44.692950 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 04:41:45.085718 sshd[6131]: Connection closed by 10.200.16.10 port 38540
Jul 15 04:41:45.086943 sshd-session[6128]: pam_unix(sshd:session): session closed for user core
Jul 15 04:41:45.089952 systemd-logind[1861]: Session 10 logged out. Waiting for processes to exit.
Jul 15 04:41:45.090354 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:38540.service: Deactivated successfully.
Jul 15 04:41:45.093003 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 04:41:45.095239 systemd-logind[1861]: Removed session 10.
Jul 15 04:41:50.086313 containerd[1881]: time="2025-07-15T04:41:50.086274994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\" id:\"a3722da0bba24543ec6fadc8e7a795149de9788904976b7c371076309e99a083\" pid:6156 exited_at:{seconds:1752554510 nanos:85975224}"
Jul 15 04:41:50.174931 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:34170.service - OpenSSH per-connection server daemon (10.200.16.10:34170).
Jul 15 04:41:50.654332 sshd[6166]: Accepted publickey for core from 10.200.16.10 port 34170 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:41:50.655394 sshd-session[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:41:50.659013 systemd-logind[1861]: New session 11 of user core.
Jul 15 04:41:50.664827 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 04:41:51.062018 sshd[6169]: Connection closed by 10.200.16.10 port 34170
Jul 15 04:41:51.061932 sshd-session[6166]: pam_unix(sshd:session): session closed for user core
Jul 15 04:41:51.067355 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:34170.service: Deactivated successfully.
Jul 15 04:41:51.070878 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 04:41:51.072384 systemd-logind[1861]: Session 11 logged out. Waiting for processes to exit.
Jul 15 04:41:51.075137 systemd-logind[1861]: Removed session 11.
Jul 15 04:41:56.132393 containerd[1881]: time="2025-07-15T04:41:56.132348735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"2dbb57758a789605773cbfe07b66161dab906f74c1c2522cc30771097aa43fd5\" pid:6193 exited_at:{seconds:1752554516 nanos:131957546}"
Jul 15 04:41:56.161978 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:34180.service - OpenSSH per-connection server daemon (10.200.16.10:34180).
Jul 15 04:41:56.615380 sshd[6203]: Accepted publickey for core from 10.200.16.10 port 34180 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:41:56.616485 sshd-session[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:41:56.619993 systemd-logind[1861]: New session 12 of user core.
Jul 15 04:41:56.627838 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 04:41:56.988812 sshd[6206]: Connection closed by 10.200.16.10 port 34180
Jul 15 04:41:56.989443 sshd-session[6203]: pam_unix(sshd:session): session closed for user core
Jul 15 04:41:56.993827 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:34180.service: Deactivated successfully.
Jul 15 04:41:56.995793 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 04:41:56.996893 systemd-logind[1861]: Session 12 logged out. Waiting for processes to exit.
Jul 15 04:41:56.998774 systemd-logind[1861]: Removed session 12.
Jul 15 04:41:57.075765 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:34184.service - OpenSSH per-connection server daemon (10.200.16.10:34184).
Jul 15 04:41:57.533634 sshd[6221]: Accepted publickey for core from 10.200.16.10 port 34184 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:41:57.534827 sshd-session[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:41:57.538665 systemd-logind[1861]: New session 13 of user core.
Jul 15 04:41:57.548839 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 04:41:57.943939 sshd[6224]: Connection closed by 10.200.16.10 port 34184
Jul 15 04:41:57.943474 sshd-session[6221]: pam_unix(sshd:session): session closed for user core
Jul 15 04:41:57.946821 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:34184.service: Deactivated successfully.
Jul 15 04:41:57.948493 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 04:41:57.949183 systemd-logind[1861]: Session 13 logged out. Waiting for processes to exit.
Jul 15 04:41:57.950383 systemd-logind[1861]: Removed session 13.
Jul 15 04:41:58.025212 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:34194.service - OpenSSH per-connection server daemon (10.200.16.10:34194).
Jul 15 04:41:58.484163 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 34194 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:41:58.485279 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:41:58.489743 systemd-logind[1861]: New session 14 of user core.
Jul 15 04:41:58.495943 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 04:41:58.875818 sshd[6240]: Connection closed by 10.200.16.10 port 34194
Jul 15 04:41:58.876596 sshd-session[6233]: pam_unix(sshd:session): session closed for user core
Jul 15 04:41:58.880021 systemd-logind[1861]: Session 14 logged out. Waiting for processes to exit.
Jul 15 04:41:58.880598 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:34194.service: Deactivated successfully.
Jul 15 04:41:58.882824 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 04:41:58.884556 systemd-logind[1861]: Removed session 14.
Jul 15 04:42:03.965903 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:44562.service - OpenSSH per-connection server daemon (10.200.16.10:44562).
Jul 15 04:42:04.426834 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 44562 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:04.427994 sshd-session[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:04.431469 systemd-logind[1861]: New session 15 of user core.
Jul 15 04:42:04.438999 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 04:42:04.810558 sshd[6256]: Connection closed by 10.200.16.10 port 44562
Jul 15 04:42:04.811109 sshd-session[6253]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:04.814242 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:44562.service: Deactivated successfully.
Jul 15 04:42:04.817082 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 04:42:04.817785 systemd-logind[1861]: Session 15 logged out. Waiting for processes to exit.
Jul 15 04:42:04.818982 systemd-logind[1861]: Removed session 15.
Jul 15 04:42:04.893369 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:44566.service - OpenSSH per-connection server daemon (10.200.16.10:44566).
Jul 15 04:42:05.350610 sshd[6268]: Accepted publickey for core from 10.200.16.10 port 44566 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:05.351689 sshd-session[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:05.355543 systemd-logind[1861]: New session 16 of user core.
Jul 15 04:42:05.364852 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 04:42:05.804375 sshd[6271]: Connection closed by 10.200.16.10 port 44566
Jul 15 04:42:05.805097 sshd-session[6268]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:05.807836 systemd-logind[1861]: Session 16 logged out. Waiting for processes to exit.
Jul 15 04:42:05.809408 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:44566.service: Deactivated successfully.
Jul 15 04:42:05.811585 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 04:42:05.813975 systemd-logind[1861]: Removed session 16.
Jul 15 04:42:05.890899 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:44572.service - OpenSSH per-connection server daemon (10.200.16.10:44572).
Jul 15 04:42:06.343521 sshd[6280]: Accepted publickey for core from 10.200.16.10 port 44572 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:06.344644 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:06.348392 systemd-logind[1861]: New session 17 of user core.
Jul 15 04:42:06.356828 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 04:42:07.053922 containerd[1881]: time="2025-07-15T04:42:07.053665500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\" id:\"0e86c3633b27c523963bbf25fcfb8196f0f50fc984271d5d01adcc30fa5f2433\" pid:6300 exited_at:{seconds:1752554527 nanos:53394811}"
Jul 15 04:42:07.395061 sshd[6283]: Connection closed by 10.200.16.10 port 44572
Jul 15 04:42:07.395431 sshd-session[6280]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:07.398518 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:44572.service: Deactivated successfully.
Jul 15 04:42:07.400181 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 04:42:07.401692 systemd-logind[1861]: Session 17 logged out. Waiting for processes to exit.
Jul 15 04:42:07.403097 systemd-logind[1861]: Removed session 17.
Jul 15 04:42:07.475053 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:44576.service - OpenSSH per-connection server daemon (10.200.16.10:44576).
Jul 15 04:42:07.929781 sshd[6331]: Accepted publickey for core from 10.200.16.10 port 44576 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:07.931124 sshd-session[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:07.934865 systemd-logind[1861]: New session 18 of user core.
Jul 15 04:42:07.938875 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 04:42:08.383216 sshd[6334]: Connection closed by 10.200.16.10 port 44576
Jul 15 04:42:08.382495 sshd-session[6331]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:08.385392 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:44576.service: Deactivated successfully.
Jul 15 04:42:08.387072 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 04:42:08.388627 systemd-logind[1861]: Session 18 logged out. Waiting for processes to exit.
Jul 15 04:42:08.389441 systemd-logind[1861]: Removed session 18.
Jul 15 04:42:08.464410 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:44584.service - OpenSSH per-connection server daemon (10.200.16.10:44584).
Jul 15 04:42:08.923329 sshd[6343]: Accepted publickey for core from 10.200.16.10 port 44584 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:08.924945 sshd-session[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:08.929063 systemd-logind[1861]: New session 19 of user core.
Jul 15 04:42:08.931834 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 04:42:09.307217 sshd[6346]: Connection closed by 10.200.16.10 port 44584
Jul 15 04:42:09.307800 sshd-session[6343]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:09.311547 systemd-logind[1861]: Session 19 logged out. Waiting for processes to exit.
Jul 15 04:42:09.312097 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:44584.service: Deactivated successfully.
Jul 15 04:42:09.313852 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 04:42:09.316187 systemd-logind[1861]: Removed session 19.
Jul 15 04:42:14.390902 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:41586.service - OpenSSH per-connection server daemon (10.200.16.10:41586).
Jul 15 04:42:14.845138 sshd[6360]: Accepted publickey for core from 10.200.16.10 port 41586 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:14.846295 sshd-session[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:14.850012 systemd-logind[1861]: New session 20 of user core.
Jul 15 04:42:14.856823 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 04:42:15.214571 sshd[6363]: Connection closed by 10.200.16.10 port 41586
Jul 15 04:42:15.215067 sshd-session[6360]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:15.218798 systemd-logind[1861]: Session 20 logged out. Waiting for processes to exit.
Jul 15 04:42:15.219210 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:41586.service: Deactivated successfully.
Jul 15 04:42:15.222131 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 04:42:15.223488 systemd-logind[1861]: Removed session 20.
Jul 15 04:42:20.169524 containerd[1881]: time="2025-07-15T04:42:20.169471390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\" id:\"615f491ced5966986cf5ae0ed6ebd57d07de8921165b999874f848b7455d32de\" pid:6392 exited_at:{seconds:1752554540 nanos:169246494}"
Jul 15 04:42:20.296122 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:49610.service - OpenSSH per-connection server daemon (10.200.16.10:49610).
Jul 15 04:42:20.753420 sshd[6402]: Accepted publickey for core from 10.200.16.10 port 49610 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:20.754536 sshd-session[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:20.758649 systemd-logind[1861]: New session 21 of user core.
Jul 15 04:42:20.764832 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 04:42:21.127983 sshd[6405]: Connection closed by 10.200.16.10 port 49610
Jul 15 04:42:21.128547 sshd-session[6402]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:21.131596 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:49610.service: Deactivated successfully.
Jul 15 04:42:21.133112 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 04:42:21.133762 systemd-logind[1861]: Session 21 logged out. Waiting for processes to exit.
Jul 15 04:42:21.135071 systemd-logind[1861]: Removed session 21.
Jul 15 04:42:26.129596 containerd[1881]: time="2025-07-15T04:42:26.129540406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"0a02273918625d0a83137183983bb83cb2fd96fba6b26fae8b5307ae2c98a976\" pid:6443 exited_at:{seconds:1752554546 nanos:129144889}"
Jul 15 04:42:26.220568 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:49614.service - OpenSSH per-connection server daemon (10.200.16.10:49614).
Jul 15 04:42:26.715618 sshd[6454]: Accepted publickey for core from 10.200.16.10 port 49614 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:26.716562 sshd-session[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:26.720621 systemd-logind[1861]: New session 22 of user core.
Jul 15 04:42:26.727831 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 04:42:27.108775 sshd[6457]: Connection closed by 10.200.16.10 port 49614
Jul 15 04:42:27.109304 sshd-session[6454]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:27.112516 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:49614.service: Deactivated successfully.
Jul 15 04:42:27.114205 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 04:42:27.114846 systemd-logind[1861]: Session 22 logged out. Waiting for processes to exit.
Jul 15 04:42:27.116434 systemd-logind[1861]: Removed session 22.
Jul 15 04:42:32.203181 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:59664.service - OpenSSH per-connection server daemon (10.200.16.10:59664).
Jul 15 04:42:32.697776 sshd[6468]: Accepted publickey for core from 10.200.16.10 port 59664 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:32.699681 sshd-session[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:32.705954 systemd-logind[1861]: New session 23 of user core.
Jul 15 04:42:32.713853 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 04:42:33.100309 sshd[6471]: Connection closed by 10.200.16.10 port 59664
Jul 15 04:42:33.100914 sshd-session[6468]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:33.105496 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:59664.service: Deactivated successfully.
Jul 15 04:42:33.108689 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 04:42:33.109859 systemd-logind[1861]: Session 23 logged out. Waiting for processes to exit.
Jul 15 04:42:33.111402 systemd-logind[1861]: Removed session 23.
Jul 15 04:42:33.589323 containerd[1881]: time="2025-07-15T04:42:33.589281156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"716bad89080497c66842c9c7525d3e5cc8cc2d8a3939d7d43bc330eec7c02ca1\" id:\"c99c42453edd897379e34efb491a54f82045fde97d453d4a023da84cb4e4c0e1\" pid:6495 exited_at:{seconds:1752554553 nanos:589081405}"
Jul 15 04:42:36.334403 containerd[1881]: time="2025-07-15T04:42:36.334361000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1d7b62e0843c098b372d001791b89876f6d1cd4b4b28553459203f159c695d\" id:\"39b97cd6ed538cf7224e0a27656517ff85dc5270377cbddbe62fef1af2f7fb3c\" pid:6516 exited_at:{seconds:1752554556 nanos:334001261}"
Jul 15 04:42:37.048594 containerd[1881]: time="2025-07-15T04:42:37.048474458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caae5995f17c26e5096c534d25ca03902b4763e6d45ef0b01174f6d28aac0956\" id:\"1d48351db16e668c95b8146e7dad99324a885598453ba34b9965b66e462a064e\" pid:6538 exited_at:{seconds:1752554557 nanos:48172424}"
Jul 15 04:42:38.187175 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:59666.service - OpenSSH per-connection server daemon (10.200.16.10:59666).
Jul 15 04:42:38.665094 sshd[6550]: Accepted publickey for core from 10.200.16.10 port 59666 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:38.666199 sshd-session[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:38.672199 systemd-logind[1861]: New session 24 of user core.
Jul 15 04:42:38.675852 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 04:42:39.049312 sshd[6553]: Connection closed by 10.200.16.10 port 59666
Jul 15 04:42:39.049898 sshd-session[6550]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:39.052554 systemd-logind[1861]: Session 24 logged out. Waiting for processes to exit.
Jul 15 04:42:39.053248 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:59666.service: Deactivated successfully.
Jul 15 04:42:39.054677 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 04:42:39.055699 systemd-logind[1861]: Removed session 24.