Jul 15 04:39:19.099414 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 15 04:39:19.099433 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025
Jul 15 04:39:19.099439 kernel: KASLR enabled
Jul 15 04:39:19.099443 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 15 04:39:19.099448 kernel: printk: legacy bootconsole [pl11] enabled
Jul 15 04:39:19.099452 kernel: efi: EFI v2.7 by EDK II
Jul 15 04:39:19.099457 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 15 04:39:19.099461 kernel: random: crng init done
Jul 15 04:39:19.099465 kernel: secureboot: Secure boot disabled
Jul 15 04:39:19.099469 kernel: ACPI: Early table checksum verification disabled
Jul 15 04:39:19.099473 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 15 04:39:19.099477 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099481 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099486 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 15 04:39:19.099491 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099495 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099500 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099504 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099509 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099513 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099517 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 15 04:39:19.099522 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 15 04:39:19.099526 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 15 04:39:19.099530 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 04:39:19.099535 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 15 04:39:19.099539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 15 04:39:19.099543 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 15 04:39:19.099547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 15 04:39:19.099551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 15 04:39:19.099556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 15 04:39:19.099561 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 15 04:39:19.099565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 15 04:39:19.099569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 15 04:39:19.099573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 15 04:39:19.099577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 15 04:39:19.099582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 15 04:39:19.099586 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 15 04:39:19.099590 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Jul 15 04:39:19.099594 kernel: Zone ranges:
Jul 15 04:39:19.099599 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Jul 15 04:39:19.099606 kernel:   DMA32    empty
Jul 15 04:39:19.099610 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Jul 15 04:39:19.099614 kernel:   Device   empty
Jul 15 04:39:19.099619 kernel: Movable zone start for each node
Jul 15 04:39:19.099623 kernel: Early memory node ranges
Jul 15 04:39:19.099628 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 15 04:39:19.099633 kernel:   node   0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 15 04:39:19.099637 kernel:   node   0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 15 04:39:19.099641 kernel:   node   0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 15 04:39:19.099646 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 15 04:39:19.099650 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 15 04:39:19.099654 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 15 04:39:19.099659 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 15 04:39:19.099663 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 15 04:39:19.099667 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 15 04:39:19.099671 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 15 04:39:19.099676 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Jul 15 04:39:19.099681 kernel: psci: probing for conduit method from ACPI.
Jul 15 04:39:19.099685 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 04:39:19.099690 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 04:39:19.099694 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 15 04:39:19.099698 kernel: psci: SMC Calling Convention v1.4
Jul 15 04:39:19.099703 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 15 04:39:19.099707 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 15 04:39:19.099711 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 04:39:19.099716 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 04:39:19.099720 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 15 04:39:19.099725 kernel: Detected PIPT I-cache on CPU0
Jul 15 04:39:19.099730 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 15 04:39:19.099734 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 04:39:19.099739 kernel: CPU features: detected: Spectre-v4
Jul 15 04:39:19.099743 kernel: CPU features: detected: Spectre-BHB
Jul 15 04:39:19.099747 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 04:39:19.099752 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 04:39:19.099756 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 15 04:39:19.099761 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 04:39:19.099765 kernel: alternatives: applying boot alternatives
Jul 15 04:39:19.099770 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:39:19.099775 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 04:39:19.099780 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 04:39:19.099785 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 04:39:19.099789 kernel: Fallback order for Node 0: 0
Jul 15 04:39:19.099793 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1048540
Jul 15 04:39:19.099798 kernel: Policy zone: Normal
Jul 15 04:39:19.099802 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 04:39:19.099806 kernel: software IO TLB: area num 2.
Jul 15 04:39:19.099811 kernel: software IO TLB: mapped [mem 0x0000000036210000-0x000000003a210000] (64MB)
Jul 15 04:39:19.099815 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 04:39:19.099820 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 04:39:19.099825 kernel: rcu: RCU event tracing is enabled.
Jul 15 04:39:19.099830 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 04:39:19.099834 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 04:39:19.099839 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 04:39:19.099843 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 04:39:19.099848 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 04:39:19.099852 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 04:39:19.099857 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 04:39:19.099861 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 04:39:19.099865 kernel: GICv3: 960 SPIs implemented
Jul 15 04:39:19.099870 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 04:39:19.099874 kernel: Root IRQ handler: gic_handle_irq
Jul 15 04:39:19.099878 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 15 04:39:19.099883 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 15 04:39:19.099888 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 15 04:39:19.099892 kernel: ITS: No ITS available, not enabling LPIs
Jul 15 04:39:19.099897 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 04:39:19.099901 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 15 04:39:19.099905 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 04:39:19.099910 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 15 04:39:19.099914 kernel: Console: colour dummy device 80x25
Jul 15 04:39:19.099919 kernel: printk: legacy console [tty1] enabled
Jul 15 04:39:19.099924 kernel: ACPI: Core revision 20240827
Jul 15 04:39:19.099928 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 15 04:39:19.099934 kernel: pid_max: default: 32768 minimum: 301
Jul 15 04:39:19.099938 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 04:39:19.099943 kernel: landlock: Up and running.
Jul 15 04:39:19.099947 kernel: SELinux:  Initializing.
Jul 15 04:39:19.099952 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:39:19.099960 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:39:19.099965 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 15 04:39:19.099970 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 15 04:39:19.099975 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 15 04:39:19.099979 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 04:39:19.099984 kernel: rcu: 	Max phase no-delay instances is 400.
Jul 15 04:39:19.099990 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 04:39:19.099994 kernel: Remapping and enabling EFI services.
Jul 15 04:39:19.099999 kernel: smp: Bringing up secondary CPUs ...
Jul 15 04:39:19.100004 kernel: Detected PIPT I-cache on CPU1
Jul 15 04:39:19.100009 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 15 04:39:19.100014 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 15 04:39:19.100019 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 04:39:19.100024 kernel: SMP: Total of 2 processors activated.
Jul 15 04:39:19.100028 kernel: CPU: All CPU(s) started at EL1
Jul 15 04:39:19.100033 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 04:39:19.100038 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 15 04:39:19.100043 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 04:39:19.100048 kernel: CPU features: detected: Common not Private translations
Jul 15 04:39:19.100052 kernel: CPU features: detected: CRC32 instructions
Jul 15 04:39:19.100058 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 15 04:39:19.100063 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 04:39:19.100067 kernel: CPU features: detected: LSE atomic instructions
Jul 15 04:39:19.100072 kernel: CPU features: detected: Privileged Access Never
Jul 15 04:39:19.100077 kernel: CPU features: detected: Speculation barrier (SB)
Jul 15 04:39:19.100082 kernel: CPU features: detected: TLB range maintenance instructions
Jul 15 04:39:19.100086 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 04:39:19.100091 kernel: CPU features: detected: Scalable Vector Extension
Jul 15 04:39:19.100096 kernel: alternatives: applying system-wide alternatives
Jul 15 04:39:19.100101 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 15 04:39:19.100106 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 15 04:39:19.100111 kernel: SVE: default vector length 16 bytes per vector
Jul 15 04:39:19.100116 kernel: Memory: 3959156K/4194160K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 213816K reserved, 16384K cma-reserved)
Jul 15 04:39:19.100121 kernel: devtmpfs: initialized
Jul 15 04:39:19.100126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 04:39:19.100130 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 04:39:19.100135 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 04:39:19.100140 kernel: 0 pages in range for non-PLT usage
Jul 15 04:39:19.100145 kernel: 508448 pages in range for PLT usage
Jul 15 04:39:19.100150 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 04:39:19.100155 kernel: SMBIOS 3.1.0 present.
Jul 15 04:39:19.100160 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 15 04:39:19.100164 kernel: DMI: Memory slots populated: 2/2
Jul 15 04:39:19.100169 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 04:39:19.100174 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 04:39:19.100179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 04:39:19.100184 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 04:39:19.100189 kernel: audit: initializing netlink subsys (disabled)
Jul 15 04:39:19.100194 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 15 04:39:19.100199 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 04:39:19.100203 kernel: cpuidle: using governor menu
Jul 15 04:39:19.100208 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 04:39:19.100213 kernel: ASID allocator initialised with 32768 entries
Jul 15 04:39:19.100217 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 04:39:19.100222 kernel: Serial: AMBA PL011 UART driver
Jul 15 04:39:19.100227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 04:39:19.100232 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 04:39:19.100237 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 04:39:19.100242 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 04:39:19.100247 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 04:39:19.100251 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 04:39:19.100256 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 04:39:19.100261 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 04:39:19.100266 kernel: ACPI: Added _OSI(Module Device)
Jul 15 04:39:19.100270 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 04:39:19.100276 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 04:39:19.100292 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 04:39:19.100297 kernel: ACPI: Interpreter enabled
Jul 15 04:39:19.100301 kernel: ACPI: Using GIC for interrupt routing
Jul 15 04:39:19.100306 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 04:39:19.100311 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 04:39:19.100316 kernel: printk: legacy bootconsole [pl11] disabled
Jul 15 04:39:19.100320 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 15 04:39:19.100325 kernel: ACPI: CPU0 has been hot-added
Jul 15 04:39:19.100331 kernel: ACPI: CPU1 has been hot-added
Jul 15 04:39:19.100336 kernel: iommu: Default domain type: Translated
Jul 15 04:39:19.100341 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 04:39:19.100345 kernel: efivars: Registered efivars operations
Jul 15 04:39:19.100350 kernel: vgaarb: loaded
Jul 15 04:39:19.100355 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 04:39:19.100359 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 04:39:19.100364 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 04:39:19.100369 kernel: pnp: PnP ACPI init
Jul 15 04:39:19.100374 kernel: pnp: PnP ACPI: found 0 devices
Jul 15 04:39:19.100379 kernel: NET: Registered PF_INET protocol family
Jul 15 04:39:19.100384 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 04:39:19.100389 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 04:39:19.100394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 04:39:19.100398 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 04:39:19.100403 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 04:39:19.100408 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 04:39:19.100412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:39:19.100418 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:39:19.100423 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 04:39:19.100427 kernel: PCI: CLS 0 bytes, default 64
Jul 15 04:39:19.100432 kernel: kvm [1]: HYP mode not available
Jul 15 04:39:19.100437 kernel: Initialise system trusted keyrings
Jul 15 04:39:19.100442 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 04:39:19.100446 kernel: Key type asymmetric registered
Jul 15 04:39:19.100451 kernel: Asymmetric key parser 'x509' registered
Jul 15 04:39:19.100456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 04:39:19.100461 kernel: io scheduler mq-deadline registered
Jul 15 04:39:19.100466 kernel: io scheduler kyber registered
Jul 15 04:39:19.100470 kernel: io scheduler bfq registered
Jul 15 04:39:19.100475 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 04:39:19.100480 kernel: thunder_xcv, ver 1.0
Jul 15 04:39:19.100485 kernel: thunder_bgx, ver 1.0
Jul 15 04:39:19.100489 kernel: nicpf, ver 1.0
Jul 15 04:39:19.100494 kernel: nicvf, ver 1.0
Jul 15 04:39:19.100605 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 04:39:19.100659 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:39:18 UTC (1752554358)
Jul 15 04:39:19.100665 kernel: efifb: probing for efifb
Jul 15 04:39:19.100670 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 15 04:39:19.100675 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 15 04:39:19.100680 kernel: efifb: scrolling: redraw
Jul 15 04:39:19.100684 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 15 04:39:19.100689 kernel: Console: switching to colour frame buffer device 128x48
Jul 15 04:39:19.100694 kernel: fb0: EFI VGA frame buffer device
Jul 15 04:39:19.100700 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 15 04:39:19.100705 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 04:39:19.100709 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 04:39:19.100714 kernel: NET: Registered PF_INET6 protocol family
Jul 15 04:39:19.100719 kernel: watchdog: NMI not fully supported
Jul 15 04:39:19.100723 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 04:39:19.100728 kernel: Segment Routing with IPv6
Jul 15 04:39:19.100733 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 04:39:19.100738 kernel: NET: Registered PF_PACKET protocol family
Jul 15 04:39:19.100743 kernel: Key type dns_resolver registered
Jul 15 04:39:19.100748 kernel: registered taskstats version 1
Jul 15 04:39:19.100753 kernel: Loading compiled-in X.509 certificates
Jul 15 04:39:19.100757 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2'
Jul 15 04:39:19.100762 kernel: Demotion targets for Node 0: null
Jul 15 04:39:19.100767 kernel: Key type .fscrypt registered
Jul 15 04:39:19.100771 kernel: Key type fscrypt-provisioning registered
Jul 15 04:39:19.100776 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 04:39:19.100781 kernel: ima: Allocated hash algorithm: sha1
Jul 15 04:39:19.100786 kernel: ima: No architecture policies found
Jul 15 04:39:19.100791 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 04:39:19.100796 kernel: clk: Disabling unused clocks
Jul 15 04:39:19.100800 kernel: PM: genpd: Disabling unused power domains
Jul 15 04:39:19.100805 kernel: Warning: unable to open an initial console.
Jul 15 04:39:19.100810 kernel: Freeing unused kernel memory: 39424K
Jul 15 04:39:19.100815 kernel: Run /init as init process
Jul 15 04:39:19.100819 kernel:   with arguments:
Jul 15 04:39:19.100824 kernel:     /init
Jul 15 04:39:19.100829 kernel:   with environment:
Jul 15 04:39:19.100834 kernel:     HOME=/
Jul 15 04:39:19.100839 kernel:     TERM=linux
Jul 15 04:39:19.100843 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 04:39:19.100849 systemd[1]: Successfully made /usr/ read-only.
Jul 15 04:39:19.100856 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 04:39:19.100862 systemd[1]: Detected virtualization microsoft.
Jul 15 04:39:19.100868 systemd[1]: Detected architecture arm64.
Jul 15 04:39:19.100872 systemd[1]: Running in initrd.
Jul 15 04:39:19.100877 systemd[1]: No hostname configured, using default hostname.
Jul 15 04:39:19.100883 systemd[1]: Hostname set to .
Jul 15 04:39:19.100888 systemd[1]: Initializing machine ID from random generator.
Jul 15 04:39:19.100893 systemd[1]: Queued start job for default target initrd.target.
Jul 15 04:39:19.100898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:39:19.100903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:39:19.100909 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 04:39:19.100915 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 04:39:19.100920 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 04:39:19.100926 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 04:39:19.100932 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 04:39:19.100937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 04:39:19.100942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:39:19.100948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:39:19.100958 systemd[1]: Reached target paths.target - Path Units.
Jul 15 04:39:19.100963 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 04:39:19.100968 systemd[1]: Reached target swap.target - Swaps.
Jul 15 04:39:19.100973 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 04:39:19.100978 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:39:19.100984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:39:19.100989 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 04:39:19.100994 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 04:39:19.101000 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:39:19.101005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:39:19.101010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:39:19.101016 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 04:39:19.101021 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 04:39:19.101026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 04:39:19.101031 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 04:39:19.101036 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 04:39:19.101042 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 04:39:19.101048 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 04:39:19.101053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 04:39:19.101068 systemd-journald[224]: Collecting audit messages is disabled.
Jul 15 04:39:19.101083 systemd-journald[224]: Journal started
Jul 15 04:39:19.101097 systemd-journald[224]: Runtime Journal (/run/log/journal/a041f05a053b443db90e30285126d40e) is 8M, max 78.5M, 70.5M free.
Jul 15 04:39:19.119184 systemd-modules-load[226]: Inserted module 'overlay'
Jul 15 04:39:19.124904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:39:19.143328 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 04:39:19.143375 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 04:39:19.151762 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jul 15 04:39:19.159356 kernel: Bridge firewalling registered
Jul 15 04:39:19.155872 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 04:39:19.161047 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:39:19.170886 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 04:39:19.180120 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:39:19.185226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:39:19.201378 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 04:39:19.216730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 04:39:19.227964 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 04:39:19.249263 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 04:39:19.257074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:39:19.274166 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:39:19.274753 systemd-tmpfiles[249]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 04:39:19.286840 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 04:39:19.300762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:39:19.313903 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 04:39:19.329424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 04:39:19.341186 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 04:39:19.366644 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:39:19.400834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:39:19.411147 systemd-resolved[263]: Positive Trust Anchors:
Jul 15 04:39:19.411155 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 04:39:19.411175 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 04:39:19.412823 systemd-resolved[263]: Defaulting to hostname 'linux'.
Jul 15 04:39:19.415223 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 04:39:19.425479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:39:19.530305 kernel: SCSI subsystem initialized
Jul 15 04:39:19.536293 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 04:39:19.543446 kernel: iscsi: registered transport (tcp)
Jul 15 04:39:19.556532 kernel: iscsi: registered transport (qla4xxx)
Jul 15 04:39:19.556545 kernel: QLogic iSCSI HBA Driver
Jul 15 04:39:19.571238 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 04:39:19.594917 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:39:19.601735 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 04:39:19.652383 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 04:39:19.657798 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 04:39:19.718294 kernel: raid6: neonx8   gen() 18524 MB/s
Jul 15 04:39:19.735290 kernel: raid6: neonx4   gen() 18552 MB/s
Jul 15 04:39:19.754288 kernel: raid6: neonx2   gen() 17093 MB/s
Jul 15 04:39:19.774290 kernel: raid6: neonx1   gen() 15019 MB/s
Jul 15 04:39:19.793289 kernel: raid6: int64x8  gen() 10530 MB/s
Jul 15 04:39:19.812289 kernel: raid6: int64x4  gen() 10612 MB/s
Jul 15 04:39:19.832287 kernel: raid6: int64x2  gen()  8982 MB/s
Jul 15 04:39:19.853709 kernel: raid6: int64x1  gen()  7013 MB/s
Jul 15 04:39:19.853723 kernel: raid6: using algorithm neonx4 gen() 18552 MB/s
Jul 15 04:39:19.876721 kernel: raid6: .... xor() 15152 MB/s, rmw enabled
Jul 15 04:39:19.876733 kernel: raid6: using neon recovery algorithm
Jul 15 04:39:19.882287 kernel: xor: measuring software checksum speed
Jul 15 04:39:19.887205 kernel:    8regs           : 27302 MB/sec
Jul 15 04:39:19.887214 kernel:    32regs          : 28810 MB/sec
Jul 15 04:39:19.889974 kernel:    arm64_neon      : 37432 MB/sec
Jul 15 04:39:19.893840 kernel: xor: using function: arm64_neon (37432 MB/sec)
Jul 15 04:39:19.931293 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 04:39:19.936931 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 04:39:19.947424 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:39:19.970456 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jul 15 04:39:19.974931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:39:19.988472 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 04:39:20.029981 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Jul 15 04:39:20.050892 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:39:20.057701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 04:39:20.109950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:39:20.122506 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 04:39:20.188303 kernel: hv_vmbus: Vmbus version:5.3
Jul 15 04:39:20.202334 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 04:39:20.202395 kernel: hv_vmbus: registering driver hv_netvsc
Jul 15 04:39:20.202409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:39:20.202507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:39:20.231767 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 15 04:39:20.231795 kernel: hv_vmbus: registering driver hid_hyperv
Jul 15 04:39:20.231804 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 04:39:20.231830 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:39:20.239103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:39:20.263665 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:39:20.303918 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 15 04:39:20.303938 kernel: hv_vmbus: registering driver hv_storvsc
Jul 15 04:39:20.303946 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 15 04:39:20.304074 kernel: PTP clock support registered
Jul 15 04:39:20.304081 kernel: scsi host1: storvsc_host_t
Jul 15 04:39:20.304163 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 15 04:39:20.304170 kernel: scsi host0: storvsc_host_t
Jul 15 04:39:20.298389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:39:20.327642 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 15 04:39:20.327679 kernel: hv_netvsc 002248b9-4b95-0022-48b9-4b95002248b9 eth0: VF slot 1 added
Jul 15 04:39:20.327843 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jul 15 04:39:20.298471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:39:20.342183 kernel: hv_utils: Registering HyperV Utility Driver
Jul 15 04:39:20.342202 kernel: hv_vmbus: registering driver hv_utils
Jul 15 04:39:20.325686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:39:20.701347 kernel: hv_utils: TimeSync IC version 4.0
Jul 15 04:39:20.701369 kernel: hv_utils: Shutdown IC version 3.2
Jul 15 04:39:20.701378 kernel: hv_utils: Heartbeat IC version 3.0
Jul 15 04:39:20.701386 kernel: hv_vmbus: registering driver hv_pci
Jul 15 04:39:20.701393 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 15 04:39:20.702801 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 15 04:39:20.702812 kernel: hv_pci b2eca0a2-15c2-4bb5-84fa-0f1ba3025f90: PCI VMBus probing: Using version 0x10004
Jul 15 04:39:20.704842 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 15 04:39:20.704937 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 15 04:39:20.705000 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 15 04:39:20.705060 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 15 04:39:20.705118 kernel: hv_pci b2eca0a2-15c2-4bb5-84fa-0f1ba3025f90: PCI host bridge to bus 15c2:00
Jul 15 04:39:20.705175 kernel: pci_bus 15c2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 15 04:39:20.705254 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 15 04:39:20.705313 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#116 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:39:20.705371 kernel: pci_bus 15c2:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 15 04:39:20.705425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#126 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:39:20.705475 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 15 04:39:20.705535 kernel: pci 15c2:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jul 15 04:39:20.705553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 04:39:20.613768 systemd-resolved[263]: Clock change detected. Flushing caches.
Jul 15 04:39:20.724993 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 15 04:39:20.725139 kernel: pci 15c2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 15 04:39:20.725156 kernel: pci 15c2:00:02.0: enabling Extended Tags
Jul 15 04:39:20.727869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:39:20.761549 kernel: pci 15c2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 15c2:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jul 15 04:39:20.761763 kernel: pci_bus 15c2:00: busn_res: [bus 00-ff] end is updated to 00
Jul 15 04:39:20.761859 kernel: pci 15c2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jul 15 04:39:20.774906 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#94 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 15 04:39:20.803740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 15 04:39:20.838034 kernel: mlx5_core 15c2:00:02.0: enabling device (0000 -> 0002)
Jul 15 04:39:20.847569 kernel: mlx5_core 15c2:00:02.0: PTM is not supported by PCIe
Jul 15 04:39:20.847682 kernel: mlx5_core 15c2:00:02.0: firmware version: 16.30.5006
Jul 15 04:39:21.026696 kernel: hv_netvsc 002248b9-4b95-0022-48b9-4b95002248b9 eth0: VF registering: eth1
Jul 15 04:39:21.026925 kernel: mlx5_core 15c2:00:02.0 eth1: joined to eth0
Jul 15 04:39:21.032743 kernel: mlx5_core 15c2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 15 04:39:21.045731 kernel: mlx5_core 15c2:00:02.0 enP5570s1: renamed from eth1
Jul 15 04:39:21.238969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 15 04:39:21.277542 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 15 04:39:21.313465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 15 04:39:21.323879 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 15 04:39:21.335802 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 15 04:39:21.349396 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 04:39:21.375500 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:39:21.386220 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:39:21.395447 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:39:21.431279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:39:21.415705 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 04:39:21.422853 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 04:39:21.444291 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 04:39:21.455766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#23 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:39:21.465783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 04:39:21.469081 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:39:22.455506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jul 15 04:39:22.468443 disk-uuid[655]: The operation has completed successfully.
Jul 15 04:39:22.473213 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 04:39:22.535283 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 04:39:22.537443 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 04:39:22.573674 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 04:39:22.593860 sh[821]: Success
Jul 15 04:39:22.625368 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 04:39:22.625428 kernel: device-mapper: uevent: version 1.0.3
Jul 15 04:39:22.630061 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 04:39:22.639753 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 04:39:22.808274 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 04:39:22.817685 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 04:39:22.837774 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 04:39:22.857745 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 04:39:22.864734 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (839)
Jul 15 04:39:22.874652 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692
Jul 15 04:39:22.874686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:39:22.877726 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 04:39:23.118467 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 04:39:23.123310 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:39:23.130408 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 04:39:23.131150 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 04:39:23.155422 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 04:39:23.183738 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (868)
Jul 15 04:39:23.183787 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:39:23.192150 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:39:23.195112 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 04:39:23.217755 kernel: BTRFS info (device sda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:39:23.218536 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 04:39:23.225901 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 04:39:23.276335 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:39:23.287404 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 04:39:23.314488 systemd-networkd[1008]: lo: Link UP
Jul 15 04:39:23.314498 systemd-networkd[1008]: lo: Gained carrier
Jul 15 04:39:23.315959 systemd-networkd[1008]: Enumeration completed
Jul 15 04:39:23.317129 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 04:39:23.317388 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:39:23.317392 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 04:39:23.321936 systemd[1]: Reached target network.target - Network.
Jul 15 04:39:23.403734 kernel: mlx5_core 15c2:00:02.0 enP5570s1: Link up
Jul 15 04:39:23.447033 kernel: hv_netvsc 002248b9-4b95-0022-48b9-4b95002248b9 eth0: Data path switched to VF: enP5570s1
Jul 15 04:39:23.447169 systemd-networkd[1008]: enP5570s1: Link UP
Jul 15 04:39:23.447215 systemd-networkd[1008]: eth0: Link UP
Jul 15 04:39:23.447346 systemd-networkd[1008]: eth0: Gained carrier
Jul 15 04:39:23.447355 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:39:23.470305 systemd-networkd[1008]: enP5570s1: Gained carrier
Jul 15 04:39:23.480748 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 15 04:39:24.068471 ignition[936]: Ignition 2.21.0
Jul 15 04:39:24.068483 ignition[936]: Stage: fetch-offline
Jul 15 04:39:24.068557 ignition[936]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:24.072798 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:39:24.068563 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:24.079834 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 04:39:24.068666 ignition[936]: parsed url from cmdline: ""
Jul 15 04:39:24.068668 ignition[936]: no config URL provided
Jul 15 04:39:24.068671 ignition[936]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 04:39:24.068676 ignition[936]: no config at "/usr/lib/ignition/user.ign"
Jul 15 04:39:24.068679 ignition[936]: failed to fetch config: resource requires networking
Jul 15 04:39:24.071138 ignition[936]: Ignition finished successfully
Jul 15 04:39:24.114743 ignition[1019]: Ignition 2.21.0
Jul 15 04:39:24.114749 ignition[1019]: Stage: fetch
Jul 15 04:39:24.114948 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:24.114961 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:24.115045 ignition[1019]: parsed url from cmdline: ""
Jul 15 04:39:24.115047 ignition[1019]: no config URL provided
Jul 15 04:39:24.115050 ignition[1019]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 04:39:24.115056 ignition[1019]: no config at "/usr/lib/ignition/user.ign"
Jul 15 04:39:24.115089 ignition[1019]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 15 04:39:24.218175 ignition[1019]: GET result: OK
Jul 15 04:39:24.218280 ignition[1019]: config has been read from IMDS userdata
Jul 15 04:39:24.218307 ignition[1019]: parsing config with SHA512: cf7693dfacf7c1936f00fb1abd8a7209cc459314c9e88d2edc891fa07f8bd428ebd7316bff80d7c8bc59952624949620b23c63348f1ed94d90df6d0031454547
Jul 15 04:39:24.225065 unknown[1019]: fetched base config from "system"
Jul 15 04:39:24.225077 unknown[1019]: fetched base config from "system"
Jul 15 04:39:24.225296 ignition[1019]: fetch: fetch complete
Jul 15 04:39:24.225081 unknown[1019]: fetched user config from "azure"
Jul 15 04:39:24.225300 ignition[1019]: fetch: fetch passed
Jul 15 04:39:24.227478 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 04:39:24.225339 ignition[1019]: Ignition finished successfully
Jul 15 04:39:24.233615 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 04:39:24.273209 ignition[1025]: Ignition 2.21.0
Jul 15 04:39:24.275931 ignition[1025]: Stage: kargs
Jul 15 04:39:24.276175 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:24.279925 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 04:39:24.276183 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:24.288423 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 04:39:24.277127 ignition[1025]: kargs: kargs passed
Jul 15 04:39:24.277176 ignition[1025]: Ignition finished successfully
Jul 15 04:39:24.325475 ignition[1031]: Ignition 2.21.0
Jul 15 04:39:24.326238 ignition[1031]: Stage: disks
Jul 15 04:39:24.329252 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 04:39:24.326430 ignition[1031]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:24.335158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 04:39:24.326439 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:24.346006 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 04:39:24.327222 ignition[1031]: disks: disks passed
Jul 15 04:39:24.355902 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 04:39:24.327274 ignition[1031]: Ignition finished successfully
Jul 15 04:39:24.364406 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 04:39:24.373334 systemd[1]: Reached target basic.target - Basic System.
Jul 15 04:39:24.382846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 04:39:24.450580 systemd-fsck[1040]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jul 15 04:39:24.455819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 04:39:24.462615 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 04:39:24.580832 systemd-networkd[1008]: enP5570s1: Gained IPv6LL
Jul 15 04:39:24.633726 kernel: EXT4-fs (sda9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none.
Jul 15 04:39:24.634545 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 04:39:24.639537 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 04:39:24.660602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:39:24.675307 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 04:39:24.697590 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054)
Jul 15 04:39:24.697622 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:39:24.700156 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 15 04:39:24.711985 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:39:24.712013 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 04:39:24.717184 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 04:39:24.717227 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:39:24.733579 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:39:24.739985 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 04:39:24.747850 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 04:39:24.964837 systemd-networkd[1008]: eth0: Gained IPv6LL
Jul 15 04:39:25.062189 coreos-metadata[1056]: Jul 15 04:39:25.062 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 15 04:39:25.071534 coreos-metadata[1056]: Jul 15 04:39:25.071 INFO Fetch successful
Jul 15 04:39:25.075787 coreos-metadata[1056]: Jul 15 04:39:25.075 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 15 04:39:25.094109 coreos-metadata[1056]: Jul 15 04:39:25.094 INFO Fetch successful
Jul 15 04:39:25.107698 coreos-metadata[1056]: Jul 15 04:39:25.107 INFO wrote hostname ci-4396.0.0-n-9104e8bf1a to /sysroot/etc/hostname
Jul 15 04:39:25.115502 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 15 04:39:25.285645 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 04:39:25.314854 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory
Jul 15 04:39:25.321594 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 04:39:25.327783 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 04:39:26.088376 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 04:39:26.095555 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 04:39:26.114444 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 04:39:26.125961 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 04:39:26.136240 kernel: BTRFS info (device sda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:39:26.148787 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 04:39:26.160455 ignition[1174]: INFO : Ignition 2.21.0
Jul 15 04:39:26.160455 ignition[1174]: INFO : Stage: mount
Jul 15 04:39:26.167608 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:26.167608 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:26.167608 ignition[1174]: INFO : mount: mount passed
Jul 15 04:39:26.167608 ignition[1174]: INFO : Ignition finished successfully
Jul 15 04:39:26.165358 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 04:39:26.172929 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 04:39:26.200824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:39:26.233151 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1185)
Jul 15 04:39:26.233184 kernel: BTRFS info (device sda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:39:26.237785 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:39:26.240941 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 04:39:26.243455 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:39:26.266618 ignition[1202]: INFO : Ignition 2.21.0
Jul 15 04:39:26.266618 ignition[1202]: INFO : Stage: files
Jul 15 04:39:26.266618 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:26.266618 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:26.283298 ignition[1202]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 04:39:26.283298 ignition[1202]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 04:39:26.283298 ignition[1202]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 04:39:26.299829 ignition[1202]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 04:39:26.299829 ignition[1202]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 04:39:26.299829 ignition[1202]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 04:39:26.293796 unknown[1202]: wrote ssh authorized keys file for user: core
Jul 15 04:39:26.343179 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 04:39:26.353497 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 15 04:39:26.391565 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 04:39:26.512651 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:39:26.521608 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:39:26.596040 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:39:26.596040 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:39:26.596040 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:39:26.596040 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:39:26.596040 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:39:26.596040 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 15 04:39:27.321010 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 04:39:27.546986 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 04:39:27.546986 ignition[1202]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 04:39:27.578865 ignition[1202]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:39:27.591908 ignition[1202]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:39:27.591908 ignition[1202]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 04:39:27.611806 ignition[1202]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 04:39:27.611806 ignition[1202]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 04:39:27.611806 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:39:27.611806 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:39:27.611806 ignition[1202]: INFO : files: files passed
Jul 15 04:39:27.611806 ignition[1202]: INFO : Ignition finished successfully
Jul 15 04:39:27.610005 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 04:39:27.618304 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 04:39:27.665674 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 04:39:27.674213 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 04:39:27.674293 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 04:39:27.705408 initrd-setup-root-after-ignition[1232]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:39:27.705408 initrd-setup-root-after-ignition[1232]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:39:27.723595 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:39:27.723280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:39:27.737873 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 04:39:27.749846 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 04:39:27.790418 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 04:39:27.790510 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 04:39:27.802708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 04:39:27.816650 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 04:39:27.828595 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 04:39:27.829325 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 04:39:27.862238 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:39:27.868513 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 04:39:27.902231 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:39:27.908319 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:39:27.918814 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 04:39:27.927859 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 04:39:27.927975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:39:27.941051 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 04:39:27.945400 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 04:39:27.955771 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 04:39:27.964588 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:39:27.973256 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 04:39:27.982280 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:39:27.991647 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 04:39:28.001745 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:39:28.012479 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 04:39:28.021790 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 04:39:28.033671 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 04:39:28.044671 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 04:39:28.044804 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:39:28.061836 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:39:28.068857 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:39:28.084038 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 04:39:28.089427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:39:28.097398 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 04:39:28.097509 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:39:28.115644 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 04:39:28.115754 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:39:28.122662 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 04:39:28.122742 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 04:39:28.131349 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 15 04:39:28.201102 ignition[1256]: INFO : Ignition 2.21.0
Jul 15 04:39:28.201102 ignition[1256]: INFO : Stage: umount
Jul 15 04:39:28.201102 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:39:28.201102 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 15 04:39:28.201102 ignition[1256]: INFO : umount: umount passed
Jul 15 04:39:28.201102 ignition[1256]: INFO : Ignition finished successfully
Jul 15 04:39:28.131423 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 15 04:39:28.148875 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 04:39:28.177507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 04:39:28.198812 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 04:39:28.199027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:39:28.212106 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 04:39:28.212210 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:39:28.227787 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 04:39:28.228616 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 04:39:28.228699 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 04:39:28.240786 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 04:39:28.240855 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 04:39:28.246411 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 04:39:28.246451 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 04:39:28.255707 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 04:39:28.255757 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 04:39:28.267030 systemd[1]: Stopped target network.target - Network. Jul 15 04:39:28.275295 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 04:39:28.275344 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:39:28.284345 systemd[1]: Stopped target paths.target - Path Units. Jul 15 04:39:28.292239 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 04:39:28.292291 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:39:28.303891 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 04:39:28.313196 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 04:39:28.322224 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 04:39:28.322271 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:39:28.330392 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 04:39:28.330418 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:39:28.340214 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 04:39:28.340260 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 04:39:28.349493 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 04:39:28.349525 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 15 04:39:28.359315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 04:39:28.369926 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 04:39:28.380810 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 04:39:28.380900 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 04:39:28.390100 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 04:39:28.584434 kernel: hv_netvsc 002248b9-4b95-0022-48b9-4b95002248b9 eth0: Data path switched from VF: enP5570s1 Jul 15 04:39:28.390268 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 04:39:28.390343 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 04:39:28.398231 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 04:39:28.398410 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 04:39:28.398489 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 04:39:28.406681 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 04:39:28.415216 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 04:39:28.415256 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:39:28.424614 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 04:39:28.440261 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 04:39:28.440348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:39:28.448971 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 04:39:28.449013 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:39:28.467410 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 15 04:39:28.467468 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 04:39:28.472268 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 04:39:28.472308 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:39:28.485117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:39:28.497946 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:39:28.498014 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:39:28.517153 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 04:39:28.521906 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:39:28.531337 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 04:39:28.531433 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 04:39:28.540565 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 04:39:28.540635 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 04:39:28.550838 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 04:39:28.550878 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:39:28.560489 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 04:39:28.560546 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:39:28.584618 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 04:39:28.584696 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 04:39:28.599643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 04:39:28.599701 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 15 04:39:28.614936 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 04:39:28.615014 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 04:39:28.620250 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 04:39:28.633774 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 04:39:28.633842 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:39:28.645121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 04:39:28.645167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:39:28.659828 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 04:39:28.659879 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:39:28.889138 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 15 04:39:28.672926 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 04:39:28.672966 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:39:28.679248 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:39:28.679290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:39:28.698817 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 04:39:28.698862 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 04:39:28.698885 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 04:39:28.698910 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 15 04:39:28.699215 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 04:39:28.699305 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 04:39:28.709482 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 04:39:28.709578 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 04:39:28.717967 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 04:39:28.727646 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 04:39:28.771400 systemd[1]: Switching root. Jul 15 04:39:28.974788 systemd-journald[224]: Journal stopped Jul 15 04:39:32.783365 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 04:39:32.783385 kernel: SELinux: policy capability open_perms=1 Jul 15 04:39:32.783392 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 04:39:32.783398 kernel: SELinux: policy capability always_check_network=0 Jul 15 04:39:32.783404 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 04:39:32.783409 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 04:39:32.783415 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 04:39:32.783420 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 04:39:32.783426 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 04:39:32.783431 kernel: audit: type=1403 audit(1752554369.720:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 04:39:32.783438 systemd[1]: Successfully loaded SELinux policy in 171.769ms. Jul 15 04:39:32.783446 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.333ms. 
Jul 15 04:39:32.783453 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:39:32.783459 systemd[1]: Detected virtualization microsoft. Jul 15 04:39:32.783465 systemd[1]: Detected architecture arm64. Jul 15 04:39:32.783472 systemd[1]: Detected first boot. Jul 15 04:39:32.783478 systemd[1]: Hostname set to . Jul 15 04:39:32.783484 systemd[1]: Initializing machine ID from random generator. Jul 15 04:39:32.783490 zram_generator::config[1298]: No configuration found. Jul 15 04:39:32.783497 kernel: NET: Registered PF_VSOCK protocol family Jul 15 04:39:32.783503 systemd[1]: Populated /etc with preset unit settings. Jul 15 04:39:32.783509 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 04:39:32.783516 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 04:39:32.783523 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 04:39:32.783529 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 04:39:32.783535 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 04:39:32.783541 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 04:39:32.783547 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 04:39:32.783553 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 04:39:32.783560 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 04:39:32.783566 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jul 15 04:39:32.783572 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 04:39:32.783578 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 04:39:32.783585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:39:32.783591 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:39:32.783597 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 04:39:32.783603 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 04:39:32.783610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 04:39:32.783616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:39:32.783623 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 15 04:39:32.783631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:39:32.783637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:39:32.783643 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 04:39:32.783650 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 04:39:32.783656 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 04:39:32.783663 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 04:39:32.783675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:39:32.783681 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:39:32.783687 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:39:32.783693 systemd[1]: Reached target swap.target - Swaps. 
Jul 15 04:39:32.783699 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 04:39:32.783706 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 04:39:32.786758 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 04:39:32.786781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:39:32.786789 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:39:32.786796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:39:32.786803 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 04:39:32.786810 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 04:39:32.786819 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 04:39:32.786826 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 04:39:32.786832 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 04:39:32.786838 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 04:39:32.786845 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 04:39:32.786853 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 04:39:32.786859 systemd[1]: Reached target machines.target - Containers. Jul 15 04:39:32.786866 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 04:39:32.786874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:39:32.786880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jul 15 04:39:32.786887 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 04:39:32.786893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:39:32.786899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:39:32.786906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:39:32.786912 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 04:39:32.786919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:39:32.786926 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 04:39:32.786934 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 04:39:32.786940 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 04:39:32.786947 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 04:39:32.786953 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 04:39:32.786959 kernel: fuse: init (API version 7.41) Jul 15 04:39:32.786965 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:39:32.786972 kernel: loop: module loaded Jul 15 04:39:32.786978 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:39:32.786985 kernel: ACPI: bus type drm_connector registered Jul 15 04:39:32.786991 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:39:32.786998 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:39:32.787028 systemd-journald[1402]: Collecting audit messages is disabled. 
Jul 15 04:39:32.787046 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 04:39:32.787054 systemd-journald[1402]: Journal started Jul 15 04:39:32.787069 systemd-journald[1402]: Runtime Journal (/run/log/journal/5d2b3191d4a24212832829ffd9490436) is 8M, max 78.5M, 70.5M free. Jul 15 04:39:31.975630 systemd[1]: Queued start job for default target multi-user.target. Jul 15 04:39:31.993234 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 15 04:39:31.993637 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 04:39:31.993946 systemd[1]: systemd-journald.service: Consumed 2.820s CPU time. Jul 15 04:39:32.815017 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 04:39:32.829488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:39:32.843349 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 04:39:32.843408 systemd[1]: Stopped verity-setup.service. Jul 15 04:39:32.861435 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:39:32.864205 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 04:39:32.869968 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 04:39:32.875210 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 04:39:32.880418 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 04:39:32.886180 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 04:39:32.891068 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 04:39:32.896765 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 04:39:32.903106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:39:32.908773 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 15 04:39:32.908913 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 04:39:32.914603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:39:32.914907 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:39:32.922630 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:39:32.922792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:39:32.929670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:39:32.929857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:39:32.936924 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 04:39:32.937042 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 04:39:32.942637 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:39:32.942786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:39:32.949128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:39:32.954856 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:39:32.961274 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 04:39:32.967905 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 04:39:32.974623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:39:32.989878 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:39:32.997449 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 04:39:33.007840 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 15 04:39:33.014315 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 04:39:33.014350 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:39:33.019400 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 04:39:33.025604 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 04:39:33.030189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:39:33.039434 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 04:39:33.045840 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 04:39:33.051763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:39:33.054845 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 04:39:33.062753 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:39:33.063607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:39:33.070125 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 04:39:33.078855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:39:33.086377 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 04:39:33.095427 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 04:39:33.101629 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 15 04:39:33.110447 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 04:39:33.122073 systemd-journald[1402]: Time spent on flushing to /var/log/journal/5d2b3191d4a24212832829ffd9490436 is 17.737ms for 943 entries. Jul 15 04:39:33.122073 systemd-journald[1402]: System Journal (/var/log/journal/5d2b3191d4a24212832829ffd9490436) is 8M, max 2.6G, 2.6G free. Jul 15 04:39:33.219079 systemd-journald[1402]: Received client request to flush runtime journal. Jul 15 04:39:33.219137 kernel: loop0: detected capacity change from 0 to 105936 Jul 15 04:39:33.116846 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 04:39:33.142492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:39:33.221080 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 04:39:33.228105 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jul 15 04:39:33.228115 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jul 15 04:39:33.246758 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:39:33.255802 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 04:39:33.265160 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 04:39:33.266961 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 04:39:33.408970 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 04:39:33.414887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:39:33.431090 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jul 15 04:39:33.431356 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Jul 15 04:39:33.434166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 15 04:39:33.549751 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 04:39:33.565739 kernel: loop1: detected capacity change from 0 to 203944 Jul 15 04:39:33.600748 kernel: loop2: detected capacity change from 0 to 28800 Jul 15 04:39:33.965287 kernel: loop3: detected capacity change from 0 to 134232 Jul 15 04:39:34.005537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 04:39:34.013178 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:39:34.044071 systemd-udevd[1463]: Using default interface naming scheme 'v255'. Jul 15 04:39:34.213620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:39:34.225619 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:39:34.274361 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 04:39:34.310217 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 04:39:34.311749 kernel: loop4: detected capacity change from 0 to 105936 Jul 15 04:39:34.325731 kernel: loop5: detected capacity change from 0 to 203944 Jul 15 04:39:34.338828 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 04:39:34.340762 kernel: loop6: detected capacity change from 0 to 28800 Jul 15 04:39:34.351842 kernel: loop7: detected capacity change from 0 to 134232 Jul 15 04:39:34.361082 (sd-merge)[1496]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 15 04:39:34.361478 (sd-merge)[1496]: Merged extensions into '/usr'. Jul 15 04:39:34.369376 systemd[1]: Reload requested from client PID 1437 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 04:39:34.369393 systemd[1]: Reloading... Jul 15 04:39:34.459747 zram_generator::config[1547]: No configuration found. 
Jul 15 04:39:34.481734 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 04:39:34.481821 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 15 04:39:34.490785 systemd-networkd[1479]: lo: Link UP Jul 15 04:39:34.490792 systemd-networkd[1479]: lo: Gained carrier Jul 15 04:39:34.492276 systemd-networkd[1479]: Enumeration completed Jul 15 04:39:34.492555 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:34.492562 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:39:34.560422 kernel: hv_vmbus: registering driver hv_balloon Jul 15 04:39:34.560526 kernel: mlx5_core 15c2:00:02.0 enP5570s1: Link up Jul 15 04:39:34.560737 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 15 04:39:34.566526 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 15 04:39:34.589396 systemd-networkd[1479]: enP5570s1: Link UP Jul 15 04:39:34.589459 systemd-networkd[1479]: eth0: Link UP Jul 15 04:39:34.589462 systemd-networkd[1479]: eth0: Gained carrier Jul 15 04:39:34.589481 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:34.589782 kernel: hv_vmbus: registering driver hyperv_fb Jul 15 04:39:34.589824 kernel: hv_netvsc 002248b9-4b95-0022-48b9-4b95002248b9 eth0: Data path switched to VF: enP5570s1 Jul 15 04:39:34.593958 systemd-networkd[1479]: enP5570s1: Gained carrier Jul 15 04:39:34.599779 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 04:39:34.611550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 15 04:39:34.634848 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 15 04:39:34.634940 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 15 04:39:34.654211 kernel: Console: switching to colour dummy device 80x25 Jul 15 04:39:34.656777 kernel: Console: switching to colour frame buffer device 128x48 Jul 15 04:39:34.734417 systemd[1]: Reloading finished in 364 ms. Jul 15 04:39:34.751156 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:39:34.757412 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 04:39:34.795381 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 15 04:39:34.804739 kernel: MACsec IEEE 802.1AE Jul 15 04:39:34.806757 systemd[1]: Starting ensure-sysext.service... Jul 15 04:39:34.812608 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:39:34.821004 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 04:39:34.828022 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 04:39:34.842018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:39:34.851588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:39:34.868992 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:39:34.878058 systemd[1]: Reload requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)... Jul 15 04:39:34.878155 systemd[1]: Reloading... Jul 15 04:39:34.879476 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jul 15 04:39:34.879498 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 04:39:34.879693 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 04:39:34.879853 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 04:39:34.880263 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 04:39:34.880397 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Jul 15 04:39:34.880424 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Jul 15 04:39:34.882658 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:39:34.882663 systemd-tmpfiles[1681]: Skipping /boot Jul 15 04:39:34.891928 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:39:34.891940 systemd-tmpfiles[1681]: Skipping /boot Jul 15 04:39:34.949834 zram_generator::config[1721]: No configuration found. Jul 15 04:39:35.016370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:39:35.096465 systemd[1]: Reloading finished in 218 ms. Jul 15 04:39:35.112677 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 04:39:35.118795 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:39:35.131900 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:39:35.137937 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jul 15 04:39:35.146135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:39:35.148805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:39:35.156929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:39:35.164597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:39:35.170502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:39:35.170751 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:39:35.172923 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 04:39:35.181907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:39:35.189088 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 04:39:35.196078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:39:35.203377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:39:35.204324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:39:35.211162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:39:35.211517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:39:35.217644 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:39:35.218035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:39:35.233821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 15 04:39:35.237162 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:39:35.244966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:39:35.254956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:39:35.261140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:39:35.261282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:39:35.268919 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 04:39:35.275057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:39:35.275195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:39:35.280912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:39:35.281053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:39:35.287539 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:39:35.287667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:39:35.297906 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 04:39:35.306828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:39:35.307835 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:39:35.318033 systemd-resolved[1785]: Positive Trust Anchors: Jul 15 04:39:35.318286 systemd-resolved[1785]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:39:35.318309 systemd-resolved[1785]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:39:35.320758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:39:35.324483 augenrules[1820]: No rules Jul 15 04:39:35.328108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:39:35.335553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:39:35.338451 systemd-resolved[1785]: Using system hostname 'ci-4396.0.0-n-9104e8bf1a'. Jul 15 04:39:35.342217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:39:35.342264 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:39:35.342302 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 04:39:35.348503 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:39:35.353493 systemd[1]: Finished ensure-sysext.service. Jul 15 04:39:35.356947 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:39:35.357106 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 15 04:39:35.362029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:39:35.362163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:39:35.368322 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:39:35.368451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:39:35.373217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:39:35.373346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:39:35.379220 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:39:35.379355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:39:35.389520 systemd[1]: Reached target network.target - Network. Jul 15 04:39:35.394009 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:39:35.399268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:39:35.399330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:39:35.774216 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 04:39:35.780282 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 04:39:36.101505 systemd-networkd[1479]: enP5570s1: Gained IPv6LL Jul 15 04:39:36.548828 systemd-networkd[1479]: eth0: Gained IPv6LL Jul 15 04:39:36.551166 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 04:39:36.557123 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 15 04:39:37.684030 ldconfig[1432]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 04:39:37.695298 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 04:39:37.701580 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 04:39:37.719644 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 04:39:37.725265 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:39:37.730356 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:39:37.736211 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:39:37.741702 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:39:37.747522 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:39:37.753035 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:39:37.758659 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 04:39:37.758686 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:39:37.763126 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:39:37.768308 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:39:37.775162 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 04:39:37.781555 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:39:37.787995 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 04:39:37.794614 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jul 15 04:39:37.809385 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 04:39:37.814258 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 04:39:37.819524 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 04:39:37.824195 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:39:37.828505 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:39:37.833239 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:39:37.833262 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:39:37.835188 systemd[1]: Starting chronyd.service - NTP client/server... Jul 15 04:39:37.850833 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 04:39:37.858149 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 04:39:37.864387 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 04:39:37.870672 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 04:39:37.877930 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 04:39:37.883859 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 04:39:37.889951 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 04:39:37.892858 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 15 04:39:37.897320 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jul 15 04:39:37.899352 (chronyd)[1840]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 15 04:39:37.900772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:37.905544 jq[1848]: false Jul 15 04:39:37.908870 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 04:39:37.915856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 04:39:37.923898 KVP[1850]: KVP starting; pid is:1850 Jul 15 04:39:37.924862 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 04:39:37.927087 chronyd[1861]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 15 04:39:37.930117 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 04:39:37.937782 kernel: hv_utils: KVP IC version 4.0 Jul 15 04:39:37.937633 KVP[1850]: KVP LIC Version: 3.1 Jul 15 04:39:37.941029 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 04:39:37.946881 extend-filesystems[1849]: Found /dev/sda6 Jul 15 04:39:37.952086 chronyd[1861]: Timezone right/UTC failed leap second check, ignoring Jul 15 04:39:37.952229 chronyd[1861]: Loaded seccomp filter (level 2) Jul 15 04:39:37.952584 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 04:39:37.957385 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 04:39:37.957771 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 04:39:37.959566 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 15 04:39:37.965329 extend-filesystems[1849]: Found /dev/sda9 Jul 15 04:39:37.974002 extend-filesystems[1849]: Checking size of /dev/sda9 Jul 15 04:39:37.972861 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 04:39:37.981979 systemd[1]: Started chronyd.service - NTP client/server. Jul 15 04:39:37.994351 jq[1878]: true Jul 15 04:39:37.995633 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 04:39:38.002214 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 04:39:38.002739 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 04:39:38.003916 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 04:39:38.004057 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 04:39:38.015107 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 04:39:38.015635 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 04:39:38.018527 update_engine[1869]: I20250715 04:39:38.018451 1869 main.cc:92] Flatcar Update Engine starting Jul 15 04:39:38.027190 extend-filesystems[1849]: Old size kept for /dev/sda9 Jul 15 04:39:38.033074 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 04:39:38.033223 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 04:39:38.050090 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 04:39:38.054859 jq[1887]: true Jul 15 04:39:38.054629 (ntainerd)[1889]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 04:39:38.111239 systemd-logind[1868]: New seat seat0. 
Jul 15 04:39:38.111951 tar[1884]: linux-arm64/helm Jul 15 04:39:38.116695 systemd-logind[1868]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 15 04:39:38.116880 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 04:39:38.171412 bash[1924]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:39:38.174004 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 04:39:38.186787 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 04:39:38.210624 sshd_keygen[1880]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 04:39:38.255875 dbus-daemon[1843]: [system] SELinux support is enabled Jul 15 04:39:38.256969 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 04:39:38.265588 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 04:39:38.270306 update_engine[1869]: I20250715 04:39:38.266083 1869 update_check_scheduler.cc:74] Next update check in 4m14s Jul 15 04:39:38.274344 dbus-daemon[1843]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 04:39:38.276885 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 04:39:38.282827 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 04:39:38.282855 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 04:39:38.291162 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 04:39:38.291522 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 15 04:39:38.301006 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 15 04:39:38.311481 coreos-metadata[1842]: Jul 15 04:39:38.310 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 15 04:39:38.312660 systemd[1]: Started update-engine.service - Update Engine. Jul 15 04:39:38.316993 coreos-metadata[1842]: Jul 15 04:39:38.314 INFO Fetch successful Jul 15 04:39:38.316993 coreos-metadata[1842]: Jul 15 04:39:38.315 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 15 04:39:38.318842 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 04:39:38.320587 coreos-metadata[1842]: Jul 15 04:39:38.320 INFO Fetch successful Jul 15 04:39:38.326184 coreos-metadata[1842]: Jul 15 04:39:38.320 INFO Fetching http://168.63.129.16/machine/36e5da92-551e-47f5-ac47-7bf44b8442aa/deea4a19%2Dfdc5%2D4178%2D96eb%2D5d10b8fc335e.%5Fci%2D4396.0.0%2Dn%2D9104e8bf1a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 15 04:39:38.326184 coreos-metadata[1842]: Jul 15 04:39:38.322 INFO Fetch successful Jul 15 04:39:38.326184 coreos-metadata[1842]: Jul 15 04:39:38.323 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 15 04:39:38.321205 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 04:39:38.337644 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 04:39:38.340741 coreos-metadata[1842]: Jul 15 04:39:38.337 INFO Fetch successful Jul 15 04:39:38.349087 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 04:39:38.371415 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 04:39:38.380897 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 04:39:38.390386 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 04:39:38.398555 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 15 04:39:38.408411 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 15 04:39:38.426708 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 04:39:38.434306 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 04:39:38.552267 locksmithd[2008]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 04:39:38.559324 tar[1884]: linux-arm64/LICENSE Jul 15 04:39:38.559646 tar[1884]: linux-arm64/README.md Jul 15 04:39:38.571301 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 04:39:38.655091 containerd[1889]: time="2025-07-15T04:39:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 04:39:38.657737 containerd[1889]: time="2025-07-15T04:39:38.657665428Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 04:39:38.668092 containerd[1889]: time="2025-07-15T04:39:38.667659796Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.632µs" Jul 15 04:39:38.668733 containerd[1889]: time="2025-07-15T04:39:38.668490444Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 04:39:38.668794 containerd[1889]: time="2025-07-15T04:39:38.668743860Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 04:39:38.668900 containerd[1889]: time="2025-07-15T04:39:38.668883836Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 04:39:38.668917 containerd[1889]: time="2025-07-15T04:39:38.668900660Z" level=info msg="loading plugin" id=io.containerd.content.v1.content 
type=io.containerd.content.v1 Jul 15 04:39:38.668933 containerd[1889]: time="2025-07-15T04:39:38.668917908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:39:38.668966 containerd[1889]: time="2025-07-15T04:39:38.668954620Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:39:38.668966 containerd[1889]: time="2025-07-15T04:39:38.668963492Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669148 containerd[1889]: time="2025-07-15T04:39:38.669133132Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669148 containerd[1889]: time="2025-07-15T04:39:38.669146484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669175 containerd[1889]: time="2025-07-15T04:39:38.669154020Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669175 containerd[1889]: time="2025-07-15T04:39:38.669159620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669234 containerd[1889]: time="2025-07-15T04:39:38.669222324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669396 containerd[1889]: time="2025-07-15T04:39:38.669374916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669424 containerd[1889]: 
time="2025-07-15T04:39:38.669399492Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:39:38.669424 containerd[1889]: time="2025-07-15T04:39:38.669406236Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 04:39:38.669460 containerd[1889]: time="2025-07-15T04:39:38.669425012Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 04:39:38.670164 containerd[1889]: time="2025-07-15T04:39:38.669823748Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 04:39:38.670164 containerd[1889]: time="2025-07-15T04:39:38.669903196Z" level=info msg="metadata content store policy set" policy=shared Jul 15 04:39:38.686134 containerd[1889]: time="2025-07-15T04:39:38.686097260Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 04:39:38.686334 containerd[1889]: time="2025-07-15T04:39:38.686280724Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 04:39:38.686334 containerd[1889]: time="2025-07-15T04:39:38.686299148Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 04:39:38.686334 containerd[1889]: time="2025-07-15T04:39:38.686307508Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 04:39:38.686334 containerd[1889]: time="2025-07-15T04:39:38.686316236Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686322612Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686494052Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686507068Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686515532Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686523364Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686529780Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686539148Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686687356Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 04:39:38.686736 containerd[1889]: time="2025-07-15T04:39:38.686705132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 04:39:38.686916 containerd[1889]: time="2025-07-15T04:39:38.686898692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 04:39:38.686959 containerd[1889]: time="2025-07-15T04:39:38.686948564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 04:39:38.686998 containerd[1889]: time="2025-07-15T04:39:38.686988356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 
04:39:38.687035 containerd[1889]: time="2025-07-15T04:39:38.687025948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 04:39:38.687088 containerd[1889]: time="2025-07-15T04:39:38.687078044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 04:39:38.687163 containerd[1889]: time="2025-07-15T04:39:38.687149140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 04:39:38.687209 containerd[1889]: time="2025-07-15T04:39:38.687198292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 04:39:38.687255 containerd[1889]: time="2025-07-15T04:39:38.687244556Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 04:39:38.687294 containerd[1889]: time="2025-07-15T04:39:38.687285100Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 04:39:38.687388 containerd[1889]: time="2025-07-15T04:39:38.687376588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 04:39:38.687449 containerd[1889]: time="2025-07-15T04:39:38.687437628Z" level=info msg="Start snapshots syncer" Jul 15 04:39:38.687569 containerd[1889]: time="2025-07-15T04:39:38.687503692Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 04:39:38.687824 containerd[1889]: time="2025-07-15T04:39:38.687791004Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 04:39:38.688007 containerd[1889]: time="2025-07-15T04:39:38.687954244Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 04:39:38.688624 containerd[1889]: time="2025-07-15T04:39:38.688572324Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688846564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688874500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688885244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688892404Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688900500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688907156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688915300Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688935676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688943308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 04:39:38.688980 containerd[1889]: time="2025-07-15T04:39:38.688949812Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 04:39:38.689215 containerd[1889]: time="2025-07-15T04:39:38.689161068Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:39:38.689215 containerd[1889]: time="2025-07-15T04:39:38.689183876Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:39:38.689215 containerd[1889]: time="2025-07-15T04:39:38.689191436Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:39:38.689215 containerd[1889]: time="2025-07-15T04:39:38.689198588Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689203708Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689387588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689399788Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689412500Z" level=info msg="runtime interface created" Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689415868Z" level=info msg="created NRI interface" Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689421084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689430060Z" level=info msg="Connect containerd service" Jul 15 04:39:38.689492 containerd[1889]: time="2025-07-15T04:39:38.689452732Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 04:39:38.692416 
containerd[1889]: time="2025-07-15T04:39:38.691691556Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:39:38.734723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:38.741034 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:39:38.994164 kubelet[2044]: E0715 04:39:38.994030 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:39:38.996321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:39:38.996549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:39:38.996954 systemd[1]: kubelet.service: Consumed 546ms CPU time, 255.1M memory peak. Jul 15 04:39:39.323232 containerd[1889]: time="2025-07-15T04:39:39.323040180Z" level=info msg="Start subscribing containerd event" Jul 15 04:39:39.323232 containerd[1889]: time="2025-07-15T04:39:39.323092228Z" level=info msg="Start recovering state" Jul 15 04:39:39.323232 containerd[1889]: time="2025-07-15T04:39:39.323202588Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 15 04:39:39.323496 containerd[1889]: time="2025-07-15T04:39:39.323376588Z" level=info msg="Start event monitor" Jul 15 04:39:39.323496 containerd[1889]: time="2025-07-15T04:39:39.323396236Z" level=info msg="Start cni network conf syncer for default" Jul 15 04:39:39.323496 containerd[1889]: time="2025-07-15T04:39:39.323404780Z" level=info msg="Start streaming server" Jul 15 04:39:39.323496 containerd[1889]: time="2025-07-15T04:39:39.323411716Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 04:39:39.323496 containerd[1889]: time="2025-07-15T04:39:39.323417332Z" level=info msg="runtime interface starting up..." Jul 15 04:39:39.323496 containerd[1889]: time="2025-07-15T04:39:39.323421788Z" level=info msg="starting plugins..." Jul 15 04:39:39.323496 containerd[1889]: time="2025-07-15T04:39:39.323464004Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 04:39:39.325122 containerd[1889]: time="2025-07-15T04:39:39.325091380Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 04:39:39.329006 containerd[1889]: time="2025-07-15T04:39:39.325246260Z" level=info msg="containerd successfully booted in 0.670507s" Jul 15 04:39:39.325346 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 04:39:39.330459 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 04:39:39.336233 systemd[1]: Startup finished in 1.644s (kernel) + 10.652s (initrd) + 9.784s (userspace) = 22.082s. Jul 15 04:39:39.537299 login[2015]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 15 04:39:39.538411 login[2017]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:39.545621 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 04:39:39.546461 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 15 04:39:39.554777 systemd-logind[1868]: New session 2 of user core. Jul 15 04:39:39.564738 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:39:39.567932 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:39:39.576288 (systemd)[2067]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:39:39.578502 systemd-logind[1868]: New session c1 of user core. Jul 15 04:39:39.749843 waagent[2019]: 2025-07-15T04:39:39.749189Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 15 04:39:39.754724 waagent[2019]: 2025-07-15T04:39:39.754383Z INFO Daemon Daemon OS: flatcar 4396.0.0 Jul 15 04:39:39.758049 waagent[2019]: 2025-07-15T04:39:39.758016Z INFO Daemon Daemon Python: 3.11.13 Jul 15 04:39:39.761735 waagent[2019]: 2025-07-15T04:39:39.761642Z INFO Daemon Daemon Run daemon Jul 15 04:39:39.765104 waagent[2019]: 2025-07-15T04:39:39.765072Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4396.0.0' Jul 15 04:39:39.771768 waagent[2019]: 2025-07-15T04:39:39.771729Z INFO Daemon Daemon Using waagent for provisioning Jul 15 04:39:39.776019 waagent[2019]: 2025-07-15T04:39:39.775981Z INFO Daemon Daemon Activate resource disk Jul 15 04:39:39.779717 waagent[2019]: 2025-07-15T04:39:39.779440Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 15 04:39:39.787667 waagent[2019]: 2025-07-15T04:39:39.787630Z INFO Daemon Daemon Found device: None Jul 15 04:39:39.790952 waagent[2019]: 2025-07-15T04:39:39.790921Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 15 04:39:39.796768 waagent[2019]: 2025-07-15T04:39:39.796738Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 15 04:39:39.802678 systemd[2067]: Queued start job for 
default target default.target. Jul 15 04:39:39.806241 waagent[2019]: 2025-07-15T04:39:39.806190Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 04:39:39.810596 waagent[2019]: 2025-07-15T04:39:39.810565Z INFO Daemon Daemon Running default provisioning handler Jul 15 04:39:39.819391 systemd[2067]: Created slice app.slice - User Application Slice. Jul 15 04:39:39.819415 systemd[2067]: Reached target paths.target - Paths. Jul 15 04:39:39.819449 systemd[2067]: Reached target timers.target - Timers. Jul 15 04:39:39.821853 systemd[2067]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:39:39.829858 waagent[2019]: 2025-07-15T04:39:39.821450Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 15 04:39:39.829531 systemd[2067]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:39:39.829573 systemd[2067]: Reached target sockets.target - Sockets. Jul 15 04:39:39.829615 systemd[2067]: Reached target basic.target - Basic System. Jul 15 04:39:39.829643 systemd[2067]: Reached target default.target - Main User Target. Jul 15 04:39:39.829661 systemd[2067]: Startup finished in 245ms. Jul 15 04:39:39.831374 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:39:39.832865 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 15 04:39:39.833623 waagent[2019]: 2025-07-15T04:39:39.833588Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 15 04:39:39.841313 waagent[2019]: 2025-07-15T04:39:39.841270Z INFO Daemon Daemon cloud-init is enabled: False Jul 15 04:39:39.845835 waagent[2019]: 2025-07-15T04:39:39.845791Z INFO Daemon Daemon Copying ovf-env.xml Jul 15 04:39:39.960548 waagent[2019]: 2025-07-15T04:39:39.958282Z INFO Daemon Daemon Successfully mounted dvd Jul 15 04:39:40.026308 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 15 04:39:40.028770 waagent[2019]: 2025-07-15T04:39:40.028252Z INFO Daemon Daemon Detect protocol endpoint Jul 15 04:39:40.032059 waagent[2019]: 2025-07-15T04:39:40.032022Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 15 04:39:40.036327 waagent[2019]: 2025-07-15T04:39:40.036299Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 15 04:39:40.040958 waagent[2019]: 2025-07-15T04:39:40.040934Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 15 04:39:40.044874 waagent[2019]: 2025-07-15T04:39:40.044840Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 15 04:39:40.048353 waagent[2019]: 2025-07-15T04:39:40.048326Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 15 04:39:40.062181 waagent[2019]: 2025-07-15T04:39:40.062146Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 15 04:39:40.070219 waagent[2019]: 2025-07-15T04:39:40.070195Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 15 04:39:40.075009 waagent[2019]: 2025-07-15T04:39:40.074982Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 15 04:39:40.166507 waagent[2019]: 2025-07-15T04:39:40.166360Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 15 04:39:40.171238 waagent[2019]: 2025-07-15T04:39:40.171192Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 15 04:39:40.179188 waagent[2019]: 2025-07-15T04:39:40.179142Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 04:39:40.199890 waagent[2019]: 2025-07-15T04:39:40.199855Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 15 04:39:40.206310 waagent[2019]: 2025-07-15T04:39:40.206269Z INFO Daemon Jul 15 04:39:40.208603 waagent[2019]: 2025-07-15T04:39:40.208574Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 33cf88b7-2bb7-4574-91cf-7d2f0020ca11 eTag: 7780421212761379331 source: Fabric] Jul 15 04:39:40.217899 waagent[2019]: 2025-07-15T04:39:40.217868Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 15 04:39:40.222837 waagent[2019]: 2025-07-15T04:39:40.222804Z INFO Daemon Jul 15 04:39:40.225399 waagent[2019]: 2025-07-15T04:39:40.225371Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 15 04:39:40.234797 waagent[2019]: 2025-07-15T04:39:40.234770Z INFO Daemon Daemon Downloading artifacts profile blob Jul 15 04:39:40.312415 waagent[2019]: 2025-07-15T04:39:40.312343Z INFO Daemon Downloaded certificate {'thumbprint': 'D2E541FBA9ECA2CBB6DC4C2A5785337BC94A5EC1', 'hasPrivateKey': False} Jul 15 04:39:40.320310 waagent[2019]: 2025-07-15T04:39:40.320273Z INFO Daemon Downloaded certificate {'thumbprint': '91B0E645629875EACCC9A3D35CBCEB7D50CA9C51', 'hasPrivateKey': True} Jul 15 04:39:40.327494 waagent[2019]: 2025-07-15T04:39:40.327456Z INFO Daemon Fetch goal state completed Jul 15 04:39:40.337919 waagent[2019]: 2025-07-15T04:39:40.337874Z INFO Daemon Daemon Starting provisioning Jul 15 04:39:40.341898 waagent[2019]: 2025-07-15T04:39:40.341870Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 15 04:39:40.345457 waagent[2019]: 2025-07-15T04:39:40.345432Z INFO Daemon Daemon Set hostname [ci-4396.0.0-n-9104e8bf1a] Jul 15 04:39:40.365771 waagent[2019]: 2025-07-15T04:39:40.365697Z INFO Daemon Daemon Publish hostname [ci-4396.0.0-n-9104e8bf1a] Jul 15 04:39:40.370790 waagent[2019]: 2025-07-15T04:39:40.370752Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 15 04:39:40.375709 waagent[2019]: 2025-07-15T04:39:40.375679Z INFO Daemon Daemon Primary interface is [eth0] Jul 15 04:39:40.385386 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:40.385392 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:39:40.385423 systemd-networkd[1479]: eth0: DHCP lease lost Jul 15 04:39:40.386355 waagent[2019]: 2025-07-15T04:39:40.386309Z INFO Daemon Daemon Create user account if not exists Jul 15 04:39:40.390388 waagent[2019]: 2025-07-15T04:39:40.390353Z INFO Daemon Daemon User core already exists, skip useradd Jul 15 04:39:40.394335 waagent[2019]: 2025-07-15T04:39:40.394307Z INFO Daemon Daemon Configure sudoer Jul 15 04:39:40.402243 waagent[2019]: 2025-07-15T04:39:40.402198Z INFO Daemon Daemon Configure sshd Jul 15 04:39:40.409629 waagent[2019]: 2025-07-15T04:39:40.409586Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 15 04:39:40.418827 waagent[2019]: 2025-07-15T04:39:40.418768Z INFO Daemon Daemon Deploy ssh public key. Jul 15 04:39:40.426772 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 15 04:39:40.538049 login[2015]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:40.541967 systemd-logind[1868]: New session 1 of user core. 
Jul 15 04:39:40.547831 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 04:39:41.546738 waagent[2019]: 2025-07-15T04:39:41.546104Z INFO Daemon Daemon Provisioning complete Jul 15 04:39:41.562386 waagent[2019]: 2025-07-15T04:39:41.562343Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 15 04:39:41.570529 waagent[2019]: 2025-07-15T04:39:41.570482Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 15 04:39:41.577469 waagent[2019]: 2025-07-15T04:39:41.577436Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 15 04:39:41.676745 waagent[2124]: 2025-07-15T04:39:41.676473Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 15 04:39:41.676745 waagent[2124]: 2025-07-15T04:39:41.676604Z INFO ExtHandler ExtHandler OS: flatcar 4396.0.0 Jul 15 04:39:41.676745 waagent[2124]: 2025-07-15T04:39:41.676641Z INFO ExtHandler ExtHandler Python: 3.11.13 Jul 15 04:39:41.676745 waagent[2124]: 2025-07-15T04:39:41.676676Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 15 04:39:41.721747 waagent[2124]: 2025-07-15T04:39:41.721298Z INFO ExtHandler ExtHandler Distro: flatcar-4396.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 15 04:39:41.721747 waagent[2124]: 2025-07-15T04:39:41.721512Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:39:41.721747 waagent[2124]: 2025-07-15T04:39:41.721561Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:39:41.727915 waagent[2124]: 2025-07-15T04:39:41.727866Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 15 04:39:41.732982 waagent[2124]: 2025-07-15T04:39:41.732937Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 15 04:39:41.733440 waagent[2124]: 
2025-07-15T04:39:41.733406Z INFO ExtHandler Jul 15 04:39:41.733565 waagent[2124]: 2025-07-15T04:39:41.733543Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0af6f982-1578-4320-b9ab-7e0aa9bde1dd eTag: 7780421212761379331 source: Fabric] Jul 15 04:39:41.733895 waagent[2124]: 2025-07-15T04:39:41.733862Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 15 04:39:41.734398 waagent[2124]: 2025-07-15T04:39:41.734364Z INFO ExtHandler Jul 15 04:39:41.734523 waagent[2124]: 2025-07-15T04:39:41.734501Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 15 04:39:41.740747 waagent[2124]: 2025-07-15T04:39:41.739902Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 15 04:39:41.799510 waagent[2124]: 2025-07-15T04:39:41.799401Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D2E541FBA9ECA2CBB6DC4C2A5785337BC94A5EC1', 'hasPrivateKey': False} Jul 15 04:39:41.800001 waagent[2124]: 2025-07-15T04:39:41.799961Z INFO ExtHandler Downloaded certificate {'thumbprint': '91B0E645629875EACCC9A3D35CBCEB7D50CA9C51', 'hasPrivateKey': True} Jul 15 04:39:41.800425 waagent[2124]: 2025-07-15T04:39:41.800390Z INFO ExtHandler Fetch goal state completed Jul 15 04:39:41.813535 waagent[2124]: 2025-07-15T04:39:41.813493Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Jul 15 04:39:41.817028 waagent[2124]: 2025-07-15T04:39:41.816987Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2124 Jul 15 04:39:41.817253 waagent[2124]: 2025-07-15T04:39:41.817223Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 15 04:39:41.817584 waagent[2124]: 2025-07-15T04:39:41.817551Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 15 04:39:41.818837 waagent[2124]: 2025-07-15T04:39:41.818797Z INFO ExtHandler 
ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jul 15 04:39:41.819247 waagent[2124]: 2025-07-15T04:39:41.819211Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4396.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 15 04:39:41.819441 waagent[2124]: 2025-07-15T04:39:41.819413Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 15 04:39:41.820020 waagent[2124]: 2025-07-15T04:39:41.819987Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 15 04:39:41.849274 waagent[2124]: 2025-07-15T04:39:41.849238Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 15 04:39:41.849607 waagent[2124]: 2025-07-15T04:39:41.849580Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 15 04:39:41.854227 waagent[2124]: 2025-07-15T04:39:41.854202Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 15 04:39:41.858909 systemd[1]: Reload requested from client PID 2141 ('systemctl') (unit waagent.service)... Jul 15 04:39:41.858924 systemd[1]: Reloading... Jul 15 04:39:41.925747 zram_generator::config[2176]: No configuration found. Jul 15 04:39:41.997540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:39:42.080373 systemd[1]: Reloading finished in 221 ms. 
Jul 15 04:39:42.101891 waagent[2124]: 2025-07-15T04:39:42.099027Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 15 04:39:42.101891 waagent[2124]: 2025-07-15T04:39:42.099166Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 15 04:39:42.690758 waagent[2124]: 2025-07-15T04:39:42.690595Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 15 04:39:42.691059 waagent[2124]: 2025-07-15T04:39:42.690938Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 15 04:39:42.691635 waagent[2124]: 2025-07-15T04:39:42.691592Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 15 04:39:42.691966 waagent[2124]: 2025-07-15T04:39:42.691879Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 15 04:39:42.692733 waagent[2124]: 2025-07-15T04:39:42.692133Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:39:42.692733 waagent[2124]: 2025-07-15T04:39:42.692204Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:39:42.692733 waagent[2124]: 2025-07-15T04:39:42.692366Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 15 04:39:42.692733 waagent[2124]: 2025-07-15T04:39:42.692498Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 15 04:39:42.692733 waagent[2124]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 15 04:39:42.692733 waagent[2124]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 15 04:39:42.692733 waagent[2124]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 15 04:39:42.692733 waagent[2124]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:39:42.692733 waagent[2124]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:39:42.692733 waagent[2124]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 15 04:39:42.693019 waagent[2124]: 2025-07-15T04:39:42.692979Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 15 04:39:42.693152 waagent[2124]: 2025-07-15T04:39:42.693124Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 15 04:39:42.693216 waagent[2124]: 2025-07-15T04:39:42.693180Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 15 04:39:42.693464 waagent[2124]: 2025-07-15T04:39:42.693428Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 15 04:39:42.693509 waagent[2124]: 2025-07-15T04:39:42.693468Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 15 04:39:42.693928 waagent[2124]: 2025-07-15T04:39:42.693895Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 15 04:39:42.694032 waagent[2124]: 2025-07-15T04:39:42.694012Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 15 04:39:42.694570 waagent[2124]: 2025-07-15T04:39:42.694541Z INFO EnvHandler ExtHandler Configure routes Jul 15 04:39:42.694804 waagent[2124]: 2025-07-15T04:39:42.694776Z INFO EnvHandler ExtHandler Gateway:None Jul 15 04:39:42.694932 waagent[2124]: 2025-07-15T04:39:42.694905Z INFO EnvHandler ExtHandler Routes:None Jul 15 04:39:42.699807 waagent[2124]: 2025-07-15T04:39:42.699767Z INFO ExtHandler ExtHandler Jul 15 04:39:42.699858 waagent[2124]: 2025-07-15T04:39:42.699842Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c9a283a8-af1d-43eb-83be-8a22b35fb37d correlation 3975c874-0b1d-430a-a917-251a92c38fff created: 2025-07-15T04:38:38.510902Z] Jul 15 04:39:42.700482 waagent[2124]: 2025-07-15T04:39:42.700442Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 15 04:39:42.701009 waagent[2124]: 2025-07-15T04:39:42.700975Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 15 04:39:42.728757 waagent[2124]: 2025-07-15T04:39:42.728364Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 15 04:39:42.728757 waagent[2124]: Try `iptables -h' or 'iptables --help' for more information.) 
Jul 15 04:39:42.728910 waagent[2124]: 2025-07-15T04:39:42.728705Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B164D415-679C-4CCE-AC64-EA2C5020ECD9;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 15 04:39:42.743148 waagent[2124]: 2025-07-15T04:39:42.743100Z INFO MonitorHandler ExtHandler Network interfaces: Jul 15 04:39:42.743148 waagent[2124]: Executing ['ip', '-a', '-o', 'link']: Jul 15 04:39:42.743148 waagent[2124]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 15 04:39:42.743148 waagent[2124]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:4b:95 brd ff:ff:ff:ff:ff:ff Jul 15 04:39:42.743148 waagent[2124]: 3: enP5570s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:4b:95 brd ff:ff:ff:ff:ff:ff\ altname enP5570p0s2 Jul 15 04:39:42.743148 waagent[2124]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 15 04:39:42.743148 waagent[2124]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 15 04:39:42.743148 waagent[2124]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 15 04:39:42.743148 waagent[2124]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 15 04:39:42.743148 waagent[2124]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 15 04:39:42.743148 waagent[2124]: 2: eth0 inet6 fe80::222:48ff:feb9:4b95/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 04:39:42.743148 waagent[2124]: 3: enP5570s1 inet6 fe80::222:48ff:feb9:4b95/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 15 04:39:42.803554 waagent[2124]: 2025-07-15T04:39:42.803497Z INFO 
EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 15 04:39:42.803554 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:42.803554 waagent[2124]: pkts bytes target prot opt in out source destination Jul 15 04:39:42.803554 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:42.803554 waagent[2124]: pkts bytes target prot opt in out source destination Jul 15 04:39:42.803554 waagent[2124]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:42.803554 waagent[2124]: pkts bytes target prot opt in out source destination Jul 15 04:39:42.803554 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 04:39:42.803554 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 04:39:42.803554 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 04:39:42.806345 waagent[2124]: 2025-07-15T04:39:42.806296Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 15 04:39:42.806345 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:42.806345 waagent[2124]: pkts bytes target prot opt in out source destination Jul 15 04:39:42.806345 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:42.806345 waagent[2124]: pkts bytes target prot opt in out source destination Jul 15 04:39:42.806345 waagent[2124]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 15 04:39:42.806345 waagent[2124]: pkts bytes target prot opt in out source destination Jul 15 04:39:42.806345 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 15 04:39:42.806345 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 15 04:39:42.806345 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 15 04:39:42.806535 waagent[2124]: 2025-07-15T04:39:42.806510Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 15 
04:39:49.247680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 04:39:49.249582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:49.356742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:49.367953 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:39:49.473770 kubelet[2274]: E0715 04:39:49.473693 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:39:49.476611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:39:49.476903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:39:49.477511 systemd[1]: kubelet.service: Consumed 114ms CPU time, 107.7M memory peak. Jul 15 04:39:55.150704 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:39:55.152300 systemd[1]: Started sshd@0-10.200.20.21:22-10.200.16.10:44646.service - OpenSSH per-connection server daemon (10.200.16.10:44646). Jul 15 04:39:55.722383 sshd[2281]: Accepted publickey for core from 10.200.16.10 port 44646 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:39:55.723479 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:55.727014 systemd-logind[1868]: New session 3 of user core. Jul 15 04:39:55.731838 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 04:39:56.133519 systemd[1]: Started sshd@1-10.200.20.21:22-10.200.16.10:44662.service - OpenSSH per-connection server daemon (10.200.16.10:44662). 
Jul 15 04:39:56.588486 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 44662 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:39:56.589542 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:39:56.593133 systemd-logind[1868]: New session 4 of user core.
Jul 15 04:39:56.600870 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 04:39:56.919304 sshd[2290]: Connection closed by 10.200.16.10 port 44662
Jul 15 04:39:56.920033 sshd-session[2287]: pam_unix(sshd:session): session closed for user core
Jul 15 04:39:56.924199 systemd-logind[1868]: Session 4 logged out. Waiting for processes to exit.
Jul 15 04:39:56.924435 systemd[1]: sshd@1-10.200.20.21:22-10.200.16.10:44662.service: Deactivated successfully.
Jul 15 04:39:56.925824 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 04:39:56.927204 systemd-logind[1868]: Removed session 4.
Jul 15 04:39:57.008258 systemd[1]: Started sshd@2-10.200.20.21:22-10.200.16.10:44670.service - OpenSSH per-connection server daemon (10.200.16.10:44670).
Jul 15 04:39:57.465913 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 44670 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:39:57.467003 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:39:57.470909 systemd-logind[1868]: New session 5 of user core.
Jul 15 04:39:57.476839 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 04:39:57.805853 sshd[2299]: Connection closed by 10.200.16.10 port 44670
Jul 15 04:39:57.806353 sshd-session[2296]: pam_unix(sshd:session): session closed for user core
Jul 15 04:39:57.809229 systemd[1]: sshd@2-10.200.20.21:22-10.200.16.10:44670.service: Deactivated successfully.
Jul 15 04:39:57.810507 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 04:39:57.811626 systemd-logind[1868]: Session 5 logged out. Waiting for processes to exit.
Jul 15 04:39:57.812551 systemd-logind[1868]: Removed session 5.
Jul 15 04:39:57.892208 systemd[1]: Started sshd@3-10.200.20.21:22-10.200.16.10:44686.service - OpenSSH per-connection server daemon (10.200.16.10:44686).
Jul 15 04:39:58.348726 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 44686 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:39:58.349801 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:39:58.353639 systemd-logind[1868]: New session 6 of user core.
Jul 15 04:39:58.359843 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 04:39:58.690926 sshd[2308]: Connection closed by 10.200.16.10 port 44686
Jul 15 04:39:58.691454 sshd-session[2305]: pam_unix(sshd:session): session closed for user core
Jul 15 04:39:58.694735 systemd[1]: sshd@3-10.200.20.21:22-10.200.16.10:44686.service: Deactivated successfully.
Jul 15 04:39:58.696165 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 04:39:58.696855 systemd-logind[1868]: Session 6 logged out. Waiting for processes to exit.
Jul 15 04:39:58.698282 systemd-logind[1868]: Removed session 6.
Jul 15 04:39:58.772352 systemd[1]: Started sshd@4-10.200.20.21:22-10.200.16.10:44690.service - OpenSSH per-connection server daemon (10.200.16.10:44690).
Jul 15 04:39:59.229054 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 44690 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:39:59.230093 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:39:59.233777 systemd-logind[1868]: New session 7 of user core.
Jul 15 04:39:59.239943 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 04:39:59.594374 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 04:39:59.594609 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:39:59.595536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 04:39:59.597859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:39:59.603134 sudo[2318]: pam_unix(sudo:session): session closed for user root
Jul 15 04:39:59.678741 sshd[2317]: Connection closed by 10.200.16.10 port 44690
Jul 15 04:39:59.679337 sshd-session[2314]: pam_unix(sshd:session): session closed for user core
Jul 15 04:39:59.683073 systemd[1]: sshd@4-10.200.20.21:22-10.200.16.10:44690.service: Deactivated successfully.
Jul 15 04:39:59.684467 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 04:39:59.686006 systemd-logind[1868]: Session 7 logged out. Waiting for processes to exit.
Jul 15 04:39:59.687210 systemd-logind[1868]: Removed session 7.
Jul 15 04:39:59.776452 systemd[1]: Started sshd@5-10.200.20.21:22-10.200.16.10:44696.service - OpenSSH per-connection server daemon (10.200.16.10:44696).
Jul 15 04:39:59.943855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:39:59.950000 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:39:59.977477 kubelet[2335]: E0715 04:39:59.977413 2335 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:39:59.979635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:39:59.979887 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:39:59.980456 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107.8M memory peak.
Jul 15 04:40:00.257689 sshd[2329]: Accepted publickey for core from 10.200.16.10 port 44696 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:40:00.258786 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:40:00.262523 systemd-logind[1868]: New session 8 of user core.
Jul 15 04:40:00.269920 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 04:40:00.525272 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 04:40:00.525478 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:40:00.539424 sudo[2343]: pam_unix(sudo:session): session closed for user root
Jul 15 04:40:00.543315 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 04:40:00.543524 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:40:00.550623 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 04:40:00.580267 augenrules[2365]: No rules
Jul 15 04:40:00.581502 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 04:40:00.581686 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 04:40:00.582486 sudo[2342]: pam_unix(sudo:session): session closed for user root
Jul 15 04:40:00.661427 sshd[2341]: Connection closed by 10.200.16.10 port 44696
Jul 15 04:40:00.660691 sshd-session[2329]: pam_unix(sshd:session): session closed for user core
Jul 15 04:40:00.664458 systemd[1]: sshd@5-10.200.20.21:22-10.200.16.10:44696.service: Deactivated successfully.
Jul 15 04:40:00.665893 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 04:40:00.666473 systemd-logind[1868]: Session 8 logged out. Waiting for processes to exit.
Jul 15 04:40:00.668091 systemd-logind[1868]: Removed session 8.
Jul 15 04:40:00.750097 systemd[1]: Started sshd@6-10.200.20.21:22-10.200.16.10:44118.service - OpenSSH per-connection server daemon (10.200.16.10:44118).
Jul 15 04:40:01.208112 sshd[2374]: Accepted publickey for core from 10.200.16.10 port 44118 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:40:01.209221 sshd-session[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:40:01.212869 systemd-logind[1868]: New session 9 of user core.
Jul 15 04:40:01.219863 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 04:40:01.464179 sudo[2378]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 04:40:01.464393 sudo[2378]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 04:40:01.741828 chronyd[1861]: Selected source PHC0
Jul 15 04:40:02.571332 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 04:40:02.586060 (dockerd)[2395]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 04:40:03.152608 dockerd[2395]: time="2025-07-15T04:40:03.152487360Z" level=info msg="Starting up"
Jul 15 04:40:03.155293 dockerd[2395]: time="2025-07-15T04:40:03.155252375Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 04:40:03.163470 dockerd[2395]: time="2025-07-15T04:40:03.163390455Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 15 04:40:03.259294 dockerd[2395]: time="2025-07-15T04:40:03.259251663Z" level=info msg="Loading containers: start."
Jul 15 04:40:03.300932 kernel: Initializing XFRM netlink socket
Jul 15 04:40:03.635269 systemd-networkd[1479]: docker0: Link UP
Jul 15 04:40:03.662307 dockerd[2395]: time="2025-07-15T04:40:03.662198814Z" level=info msg="Loading containers: done."
Jul 15 04:40:03.686125 dockerd[2395]: time="2025-07-15T04:40:03.686076492Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 04:40:03.686287 dockerd[2395]: time="2025-07-15T04:40:03.686168484Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 15 04:40:03.686287 dockerd[2395]: time="2025-07-15T04:40:03.686257356Z" level=info msg="Initializing buildkit"
Jul 15 04:40:03.757469 dockerd[2395]: time="2025-07-15T04:40:03.757392135Z" level=info msg="Completed buildkit initialization"
Jul 15 04:40:03.762852 dockerd[2395]: time="2025-07-15T04:40:03.762801887Z" level=info msg="Daemon has completed initialization"
Jul 15 04:40:03.763389 dockerd[2395]: time="2025-07-15T04:40:03.763091703Z" level=info msg="API listen on /run/docker.sock"
Jul 15 04:40:03.763329 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 04:40:04.587313 containerd[1889]: time="2025-07-15T04:40:04.587267639Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 15 04:40:05.643153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522186267.mount: Deactivated successfully.
Jul 15 04:40:06.933026 containerd[1889]: time="2025-07-15T04:40:06.932969295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:06.938740 containerd[1889]: time="2025-07-15T04:40:06.938656863Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793"
Jul 15 04:40:06.944731 containerd[1889]: time="2025-07-15T04:40:06.944674831Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:06.951740 containerd[1889]: time="2025-07-15T04:40:06.951545463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:06.952485 containerd[1889]: time="2025-07-15T04:40:06.952463055Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.365026632s"
Jul 15 04:40:06.952586 containerd[1889]: time="2025-07-15T04:40:06.952574191Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 15 04:40:06.953975 containerd[1889]: time="2025-07-15T04:40:06.953929159Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 15 04:40:08.194436 containerd[1889]: time="2025-07-15T04:40:08.193838551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:08.198777 containerd[1889]: time="2025-07-15T04:40:08.198747431Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677"
Jul 15 04:40:08.204843 containerd[1889]: time="2025-07-15T04:40:08.204813767Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:08.212854 containerd[1889]: time="2025-07-15T04:40:08.212815159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:08.213545 containerd[1889]: time="2025-07-15T04:40:08.213516583Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.259549304s"
Jul 15 04:40:08.213944 containerd[1889]: time="2025-07-15T04:40:08.213827647Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 15 04:40:08.216567 containerd[1889]: time="2025-07-15T04:40:08.216542247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 15 04:40:09.294461 containerd[1889]: time="2025-07-15T04:40:09.294406175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:09.299417 containerd[1889]: time="2025-07-15T04:40:09.299247079Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066"
Jul 15 04:40:09.304195 containerd[1889]: time="2025-07-15T04:40:09.304170719Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:09.311857 containerd[1889]: time="2025-07-15T04:40:09.311805191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:09.312517 containerd[1889]: time="2025-07-15T04:40:09.312407439Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.095834496s"
Jul 15 04:40:09.312517 containerd[1889]: time="2025-07-15T04:40:09.312435935Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 15 04:40:09.312950 containerd[1889]: time="2025-07-15T04:40:09.312919751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 15 04:40:10.115946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 15 04:40:10.117288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:40:10.301415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:40:10.312001 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:40:10.407923 kubelet[2673]: E0715 04:40:10.407771 2673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:40:10.410001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:40:10.410234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:40:10.410801 systemd[1]: kubelet.service: Consumed 106ms CPU time, 106.9M memory peak.
Jul 15 04:40:11.046703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025229487.mount: Deactivated successfully.
Jul 15 04:40:11.387205 containerd[1889]: time="2025-07-15T04:40:11.386763340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:11.391743 containerd[1889]: time="2025-07-15T04:40:11.391706552Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957"
Jul 15 04:40:11.395623 containerd[1889]: time="2025-07-15T04:40:11.395557750Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:11.399840 containerd[1889]: time="2025-07-15T04:40:11.399786897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:11.400410 containerd[1889]: time="2025-07-15T04:40:11.400085484Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 2.087138157s"
Jul 15 04:40:11.400410 containerd[1889]: time="2025-07-15T04:40:11.400114573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 15 04:40:11.400645 containerd[1889]: time="2025-07-15T04:40:11.400598894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 04:40:12.197561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476312773.mount: Deactivated successfully.
Jul 15 04:40:14.066956 containerd[1889]: time="2025-07-15T04:40:14.066892774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:14.070804 containerd[1889]: time="2025-07-15T04:40:14.070756829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 15 04:40:14.076917 containerd[1889]: time="2025-07-15T04:40:14.076891370Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:14.083403 containerd[1889]: time="2025-07-15T04:40:14.083351428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:14.084279 containerd[1889]: time="2025-07-15T04:40:14.083891174Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.683246687s"
Jul 15 04:40:14.084279 containerd[1889]: time="2025-07-15T04:40:14.083924832Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 15 04:40:14.084455 containerd[1889]: time="2025-07-15T04:40:14.084428561Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 04:40:14.703011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237861434.mount: Deactivated successfully.
Jul 15 04:40:14.739493 containerd[1889]: time="2025-07-15T04:40:14.739407592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:40:14.743806 containerd[1889]: time="2025-07-15T04:40:14.743769912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 15 04:40:14.750738 containerd[1889]: time="2025-07-15T04:40:14.750677872Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:40:14.758175 containerd[1889]: time="2025-07-15T04:40:14.758114227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 04:40:14.758577 containerd[1889]: time="2025-07-15T04:40:14.758422462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 673.964844ms"
Jul 15 04:40:14.758577 containerd[1889]: time="2025-07-15T04:40:14.758451127Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 04:40:14.758923 containerd[1889]: time="2025-07-15T04:40:14.758898631Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 15 04:40:15.571642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352461041.mount: Deactivated successfully.
Jul 15 04:40:18.646236 containerd[1889]: time="2025-07-15T04:40:18.646176058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:18.650141 containerd[1889]: time="2025-07-15T04:40:18.649945543Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
Jul 15 04:40:18.658678 containerd[1889]: time="2025-07-15T04:40:18.658651180Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:18.666166 containerd[1889]: time="2025-07-15T04:40:18.666114530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:18.666854 containerd[1889]: time="2025-07-15T04:40:18.666744765Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.907819606s"
Jul 15 04:40:18.666854 containerd[1889]: time="2025-07-15T04:40:18.666772374Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 15 04:40:20.616003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 15 04:40:20.618900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:40:20.722876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:40:20.728184 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 04:40:20.857887 kubelet[2825]: E0715 04:40:20.857833 2825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 04:40:20.861583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 04:40:20.861840 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 04:40:20.862273 systemd[1]: kubelet.service: Consumed 203ms CPU time, 106.3M memory peak.
Jul 15 04:40:21.380536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:40:21.380738 systemd[1]: kubelet.service: Consumed 203ms CPU time, 106.3M memory peak.
Jul 15 04:40:21.382565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:40:21.402693 systemd[1]: Reload requested from client PID 2839 ('systemctl') (unit session-9.scope)...
Jul 15 04:40:21.402705 systemd[1]: Reloading...
Jul 15 04:40:21.499744 zram_generator::config[2884]: No configuration found.
Jul 15 04:40:21.569493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:40:21.652412 systemd[1]: Reloading finished in 249 ms.
Jul 15 04:40:21.700139 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 04:40:21.700202 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 04:40:21.701748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:40:21.701789 systemd[1]: kubelet.service: Consumed 74ms CPU time, 95M memory peak.
Jul 15 04:40:21.702937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 04:40:21.950976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 04:40:21.955959 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 04:40:21.981267 kubelet[2951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:40:21.981267 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 04:40:21.981267 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 04:40:21.981267 kubelet[2951]: I0715 04:40:21.980768 2951 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 04:40:22.208666 kubelet[2951]: I0715 04:40:22.208556 2951 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 04:40:22.208666 kubelet[2951]: I0715 04:40:22.208591 2951 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 04:40:22.209013 kubelet[2951]: I0715 04:40:22.208990 2951 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 04:40:22.221539 kubelet[2951]: E0715 04:40:22.221479 2951 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:40:22.222214 kubelet[2951]: I0715 04:40:22.222113 2951 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 04:40:22.227302 kubelet[2951]: I0715 04:40:22.227281 2951 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 04:40:22.231440 kubelet[2951]: I0715 04:40:22.231209 2951 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 04:40:22.231671 kubelet[2951]: I0715 04:40:22.231647 2951 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 04:40:22.231800 kubelet[2951]: I0715 04:40:22.231777 2951 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 04:40:22.231951 kubelet[2951]: I0715 04:40:22.231799 2951 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-9104e8bf1a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 04:40:22.232029 kubelet[2951]: I0715 04:40:22.231961 2951 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 04:40:22.232029 kubelet[2951]: I0715 04:40:22.231969 2951 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 04:40:22.232093 kubelet[2951]: I0715 04:40:22.232081 2951 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:40:22.233573 kubelet[2951]: I0715 04:40:22.233553 2951 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 04:40:22.233587 kubelet[2951]: I0715 04:40:22.233580 2951 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 04:40:22.233605 kubelet[2951]: I0715 04:40:22.233599 2951 kubelet.go:314] "Adding apiserver pod source"
Jul 15 04:40:22.233626 kubelet[2951]: I0715 04:40:22.233619 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 04:40:22.237920 kubelet[2951]: W0715 04:40:22.237611 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-9104e8bf1a&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused
Jul 15 04:40:22.237920 kubelet[2951]: E0715 04:40:22.237658 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-9104e8bf1a&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:40:22.238366 kubelet[2951]: W0715 04:40:22.238331 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused
Jul 15 04:40:22.238466 kubelet[2951]: E0715 04:40:22.238453 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError"
Jul 15 04:40:22.238594 kubelet[2951]: I0715 04:40:22.238581 2951 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 15 04:40:22.238996 kubelet[2951]: I0715 04:40:22.238979 2951 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 04:40:22.239103 kubelet[2951]: W0715 04:40:22.239094 2951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 04:40:22.240093 kubelet[2951]: I0715 04:40:22.240073 2951 server.go:1274] "Started kubelet"
Jul 15 04:40:22.240906 kubelet[2951]: I0715 04:40:22.240630 2951 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 04:40:22.241252 kubelet[2951]: I0715 04:40:22.241235 2951 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 04:40:22.242063 kubelet[2951]: I0715 04:40:22.242021 2951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 04:40:22.242357 kubelet[2951]: I0715 04:40:22.242340 2951 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 04:40:22.242891 kubelet[2951]: I0715 04:40:22.242870 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 04:40:22.244018 kubelet[2951]: E0715 04:40:22.243131 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.21:6443/api/v1/namespaces/default/events\": dial tcp
10.200.20.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4396.0.0-n-9104e8bf1a.185252f62b28905b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4396.0.0-n-9104e8bf1a,UID:ci-4396.0.0-n-9104e8bf1a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4396.0.0-n-9104e8bf1a,},FirstTimestamp:2025-07-15 04:40:22.240055387 +0000 UTC m=+0.281755535,LastTimestamp:2025-07-15 04:40:22.240055387 +0000 UTC m=+0.281755535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4396.0.0-n-9104e8bf1a,}" Jul 15 04:40:22.245036 kubelet[2951]: I0715 04:40:22.244552 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:40:22.246267 kubelet[2951]: E0715 04:40:22.246253 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:22.246369 kubelet[2951]: I0715 04:40:22.246361 2951 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 04:40:22.246608 kubelet[2951]: I0715 04:40:22.246596 2951 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 04:40:22.246755 kubelet[2951]: I0715 04:40:22.246746 2951 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:40:22.247152 kubelet[2951]: W0715 04:40:22.247127 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jul 15 04:40:22.247262 kubelet[2951]: E0715 04:40:22.247247 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:40:22.247578 kubelet[2951]: E0715 04:40:22.247554 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-9104e8bf1a?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="200ms" Jul 15 04:40:22.249187 kubelet[2951]: I0715 04:40:22.249138 2951 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:40:22.250062 kubelet[2951]: E0715 04:40:22.249979 2951 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:40:22.250212 kubelet[2951]: I0715 04:40:22.250196 2951 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:40:22.250744 kubelet[2951]: I0715 04:40:22.250259 2951 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:40:22.272836 kubelet[2951]: I0715 04:40:22.272811 2951 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 04:40:22.272836 kubelet[2951]: I0715 04:40:22.272827 2951 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 04:40:22.272836 kubelet[2951]: I0715 04:40:22.272845 2951 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:40:22.346513 kubelet[2951]: E0715 04:40:22.346480 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:22.448074 kubelet[2951]: E0715 04:40:22.448013 2951 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-9104e8bf1a?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="400ms" Jul 15 04:40:22.448211 kubelet[2951]: E0715 04:40:22.448182 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:22.500579 kubelet[2951]: I0715 04:40:22.499642 2951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:40:22.501275 kubelet[2951]: I0715 04:40:22.501239 2951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 04:40:22.501275 kubelet[2951]: I0715 04:40:22.501268 2951 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 04:40:22.501352 kubelet[2951]: I0715 04:40:22.501288 2951 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 04:40:22.501352 kubelet[2951]: E0715 04:40:22.501327 2951 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:40:22.501890 kubelet[2951]: W0715 04:40:22.501866 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jul 15 04:40:22.501999 kubelet[2951]: E0715 04:40:22.501984 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:40:22.540367 kubelet[2951]: I0715 04:40:22.540337 2951 policy_none.go:49] "None policy: Start" Jul 15 
04:40:22.541193 kubelet[2951]: I0715 04:40:22.541178 2951 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 04:40:22.541303 kubelet[2951]: I0715 04:40:22.541296 2951 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:40:22.548621 kubelet[2951]: E0715 04:40:22.548600 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:22.555298 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 04:40:22.564817 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:40:22.567473 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 04:40:22.585735 kubelet[2951]: I0715 04:40:22.585673 2951 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:40:22.586033 kubelet[2951]: I0715 04:40:22.586016 2951 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:40:22.586129 kubelet[2951]: I0715 04:40:22.586097 2951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:40:22.586396 kubelet[2951]: I0715 04:40:22.586379 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:40:22.588984 kubelet[2951]: E0715 04:40:22.588892 2951 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:22.611373 systemd[1]: Created slice kubepods-burstable-podd2c47066e52e2635315455d1028cb9cd.slice - libcontainer container kubepods-burstable-podd2c47066e52e2635315455d1028cb9cd.slice. 
Jul 15 04:40:22.625195 systemd[1]: Created slice kubepods-burstable-poda684775aed220d45a95d504fad4287fd.slice - libcontainer container kubepods-burstable-poda684775aed220d45a95d504fad4287fd.slice. Jul 15 04:40:22.639428 systemd[1]: Created slice kubepods-burstable-podccd36bc283a7316ecbf8c3553786b9ee.slice - libcontainer container kubepods-burstable-podccd36bc283a7316ecbf8c3553786b9ee.slice. Jul 15 04:40:22.649179 kubelet[2951]: I0715 04:40:22.649068 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649179 kubelet[2951]: I0715 04:40:22.649103 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccd36bc283a7316ecbf8c3553786b9ee-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-9104e8bf1a\" (UID: \"ccd36bc283a7316ecbf8c3553786b9ee\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649179 kubelet[2951]: I0715 04:40:22.649119 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2c47066e52e2635315455d1028cb9cd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" (UID: \"d2c47066e52e2635315455d1028cb9cd\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649179 kubelet[2951]: I0715 04:40:22.649129 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: 
\"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649179 kubelet[2951]: I0715 04:40:22.649147 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649394 kubelet[2951]: I0715 04:40:22.649166 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-kubeconfig\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649394 kubelet[2951]: I0715 04:40:22.649176 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649394 kubelet[2951]: I0715 04:40:22.649188 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2c47066e52e2635315455d1028cb9cd-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" (UID: \"d2c47066e52e2635315455d1028cb9cd\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.649394 kubelet[2951]: I0715 04:40:22.649198 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2c47066e52e2635315455d1028cb9cd-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" (UID: \"d2c47066e52e2635315455d1028cb9cd\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.677370 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 15 04:40:22.688139 kubelet[2951]: I0715 04:40:22.688112 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.688551 kubelet[2951]: E0715 04:40:22.688510 2951 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.849056 kubelet[2951]: E0715 04:40:22.848906 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-9104e8bf1a?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="800ms" Jul 15 04:40:22.890922 kubelet[2951]: I0715 04:40:22.890867 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.891245 kubelet[2951]: E0715 04:40:22.891216 2951 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:22.924345 containerd[1889]: time="2025-07-15T04:40:22.924242731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-9104e8bf1a,Uid:d2c47066e52e2635315455d1028cb9cd,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:22.937919 containerd[1889]: time="2025-07-15T04:40:22.937871847Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-9104e8bf1a,Uid:a684775aed220d45a95d504fad4287fd,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:22.941900 containerd[1889]: time="2025-07-15T04:40:22.941761254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-9104e8bf1a,Uid:ccd36bc283a7316ecbf8c3553786b9ee,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:23.072113 containerd[1889]: time="2025-07-15T04:40:23.072073322Z" level=info msg="connecting to shim d8536136a24ad5bde5a71f8262ac61d6e1753567307fa7382dae93c759b45bbf" address="unix:///run/containerd/s/f396207895228e47146f00a977673d06c72fa9e73de1fb3b15a04237093a5d73" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:23.097880 systemd[1]: Started cri-containerd-d8536136a24ad5bde5a71f8262ac61d6e1753567307fa7382dae93c759b45bbf.scope - libcontainer container d8536136a24ad5bde5a71f8262ac61d6e1753567307fa7382dae93c759b45bbf. Jul 15 04:40:23.114629 containerd[1889]: time="2025-07-15T04:40:23.114315677Z" level=info msg="connecting to shim e9284913f0e723aa5e5cdc5ef0e97274ca255820deda672ed8e0c9f3dccbc114" address="unix:///run/containerd/s/72104b11c38cb8fbb96a6635b14dad94a42cf4e46a6e0773a422f3c0040b32d3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:23.115324 containerd[1889]: time="2025-07-15T04:40:23.115222188Z" level=info msg="connecting to shim e87d286be8528333d0effaa63b25023ccbf7f6c7597d1dc473cd35879a4b641c" address="unix:///run/containerd/s/a8efa3b07f841bc57057197f575dff8a602eeab7278e438d6871dcc124b25b9e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:23.135976 systemd[1]: Started cri-containerd-e9284913f0e723aa5e5cdc5ef0e97274ca255820deda672ed8e0c9f3dccbc114.scope - libcontainer container e9284913f0e723aa5e5cdc5ef0e97274ca255820deda672ed8e0c9f3dccbc114. 
Jul 15 04:40:23.142120 systemd[1]: Started cri-containerd-e87d286be8528333d0effaa63b25023ccbf7f6c7597d1dc473cd35879a4b641c.scope - libcontainer container e87d286be8528333d0effaa63b25023ccbf7f6c7597d1dc473cd35879a4b641c. Jul 15 04:40:23.166489 containerd[1889]: time="2025-07-15T04:40:23.166444657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4396.0.0-n-9104e8bf1a,Uid:d2c47066e52e2635315455d1028cb9cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8536136a24ad5bde5a71f8262ac61d6e1753567307fa7382dae93c759b45bbf\"" Jul 15 04:40:23.167275 kubelet[2951]: W0715 04:40:23.166344 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-9104e8bf1a&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jul 15 04:40:23.167275 kubelet[2951]: E0715 04:40:23.167230 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4396.0.0-n-9104e8bf1a&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:40:23.172710 containerd[1889]: time="2025-07-15T04:40:23.172667774Z" level=info msg="CreateContainer within sandbox \"d8536136a24ad5bde5a71f8262ac61d6e1753567307fa7382dae93c759b45bbf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:40:23.192122 containerd[1889]: time="2025-07-15T04:40:23.192087509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4396.0.0-n-9104e8bf1a,Uid:ccd36bc283a7316ecbf8c3553786b9ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9284913f0e723aa5e5cdc5ef0e97274ca255820deda672ed8e0c9f3dccbc114\"" Jul 15 04:40:23.195733 containerd[1889]: time="2025-07-15T04:40:23.195689480Z" level=info 
msg="CreateContainer within sandbox \"e9284913f0e723aa5e5cdc5ef0e97274ca255820deda672ed8e0c9f3dccbc114\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:40:23.214570 containerd[1889]: time="2025-07-15T04:40:23.214517163Z" level=info msg="Container d73b3aa23528dc4d8ce0c03ce68fb9caea41f242e4bf9607de4f795c5e43832c: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:23.219795 containerd[1889]: time="2025-07-15T04:40:23.219743814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4396.0.0-n-9104e8bf1a,Uid:a684775aed220d45a95d504fad4287fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e87d286be8528333d0effaa63b25023ccbf7f6c7597d1dc473cd35879a4b641c\"" Jul 15 04:40:23.221671 containerd[1889]: time="2025-07-15T04:40:23.221642367Z" level=info msg="CreateContainer within sandbox \"e87d286be8528333d0effaa63b25023ccbf7f6c7597d1dc473cd35879a4b641c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:40:23.241963 containerd[1889]: time="2025-07-15T04:40:23.241908147Z" level=info msg="Container 1b23423a9e3f7a6bfe821b89ac05de3b7fda93766691a645c2e76e909b135380: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:23.253889 update_engine[1869]: I20250715 04:40:23.253822 1869 update_attempter.cc:509] Updating boot flags... 
Jul 15 04:40:23.295098 kubelet[2951]: I0715 04:40:23.294894 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:23.295384 kubelet[2951]: E0715 04:40:23.295338 2951 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:23.649441 kubelet[2951]: E0715 04:40:23.649387 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4396.0.0-n-9104e8bf1a?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="1.6s" Jul 15 04:40:23.762659 kubelet[2951]: W0715 04:40:23.762576 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jul 15 04:40:23.762659 kubelet[2951]: E0715 04:40:23.762658 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:40:23.833054 kubelet[2951]: W0715 04:40:23.833011 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jul 15 04:40:23.833054 kubelet[2951]: E0715 04:40:23.833059 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:40:24.079965 kubelet[2951]: W0715 04:40:23.836481 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Jul 15 04:40:24.079965 kubelet[2951]: E0715 04:40:23.836531 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:40:24.083067 containerd[1889]: time="2025-07-15T04:40:24.083023821Z" level=info msg="CreateContainer within sandbox \"d8536136a24ad5bde5a71f8262ac61d6e1753567307fa7382dae93c759b45bbf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d73b3aa23528dc4d8ce0c03ce68fb9caea41f242e4bf9607de4f795c5e43832c\"" Jul 15 04:40:24.083825 containerd[1889]: time="2025-07-15T04:40:24.083778631Z" level=info msg="StartContainer for \"d73b3aa23528dc4d8ce0c03ce68fb9caea41f242e4bf9607de4f795c5e43832c\"" Jul 15 04:40:24.085122 containerd[1889]: time="2025-07-15T04:40:24.085083572Z" level=info msg="connecting to shim d73b3aa23528dc4d8ce0c03ce68fb9caea41f242e4bf9607de4f795c5e43832c" address="unix:///run/containerd/s/f396207895228e47146f00a977673d06c72fa9e73de1fb3b15a04237093a5d73" protocol=ttrpc version=3 Jul 15 04:40:24.099143 kubelet[2951]: I0715 04:40:24.099110 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:24.099914 kubelet[2951]: E0715 04:40:24.099864 2951 kubelet_node_status.go:95] "Unable to 
register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:24.102856 systemd[1]: Started cri-containerd-d73b3aa23528dc4d8ce0c03ce68fb9caea41f242e4bf9607de4f795c5e43832c.scope - libcontainer container d73b3aa23528dc4d8ce0c03ce68fb9caea41f242e4bf9607de4f795c5e43832c. Jul 15 04:40:24.145720 containerd[1889]: time="2025-07-15T04:40:24.145670105Z" level=info msg="StartContainer for \"d73b3aa23528dc4d8ce0c03ce68fb9caea41f242e4bf9607de4f795c5e43832c\" returns successfully" Jul 15 04:40:24.147196 containerd[1889]: time="2025-07-15T04:40:24.147161252Z" level=info msg="CreateContainer within sandbox \"e9284913f0e723aa5e5cdc5ef0e97274ca255820deda672ed8e0c9f3dccbc114\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b23423a9e3f7a6bfe821b89ac05de3b7fda93766691a645c2e76e909b135380\"" Jul 15 04:40:24.147536 containerd[1889]: time="2025-07-15T04:40:24.147513920Z" level=info msg="StartContainer for \"1b23423a9e3f7a6bfe821b89ac05de3b7fda93766691a645c2e76e909b135380\"" Jul 15 04:40:24.150593 containerd[1889]: time="2025-07-15T04:40:24.150563712Z" level=info msg="connecting to shim 1b23423a9e3f7a6bfe821b89ac05de3b7fda93766691a645c2e76e909b135380" address="unix:///run/containerd/s/72104b11c38cb8fbb96a6635b14dad94a42cf4e46a6e0773a422f3c0040b32d3" protocol=ttrpc version=3 Jul 15 04:40:24.156042 containerd[1889]: time="2025-07-15T04:40:24.156011162Z" level=info msg="Container 135d5759bfaf99fc57faf5b9e862e9868ccf1c40a094cf0b145779e21343f09c: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:24.170871 systemd[1]: Started cri-containerd-1b23423a9e3f7a6bfe821b89ac05de3b7fda93766691a645c2e76e909b135380.scope - libcontainer container 1b23423a9e3f7a6bfe821b89ac05de3b7fda93766691a645c2e76e909b135380. 
Jul 15 04:40:24.185523 containerd[1889]: time="2025-07-15T04:40:24.185414119Z" level=info msg="CreateContainer within sandbox \"e87d286be8528333d0effaa63b25023ccbf7f6c7597d1dc473cd35879a4b641c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"135d5759bfaf99fc57faf5b9e862e9868ccf1c40a094cf0b145779e21343f09c\"" Jul 15 04:40:24.186243 containerd[1889]: time="2025-07-15T04:40:24.186217810Z" level=info msg="StartContainer for \"135d5759bfaf99fc57faf5b9e862e9868ccf1c40a094cf0b145779e21343f09c\"" Jul 15 04:40:24.188304 containerd[1889]: time="2025-07-15T04:40:24.188275648Z" level=info msg="connecting to shim 135d5759bfaf99fc57faf5b9e862e9868ccf1c40a094cf0b145779e21343f09c" address="unix:///run/containerd/s/a8efa3b07f841bc57057197f575dff8a602eeab7278e438d6871dcc124b25b9e" protocol=ttrpc version=3 Jul 15 04:40:24.209046 systemd[1]: Started cri-containerd-135d5759bfaf99fc57faf5b9e862e9868ccf1c40a094cf0b145779e21343f09c.scope - libcontainer container 135d5759bfaf99fc57faf5b9e862e9868ccf1c40a094cf0b145779e21343f09c. 
Jul 15 04:40:24.227878 containerd[1889]: time="2025-07-15T04:40:24.227775326Z" level=info msg="StartContainer for \"1b23423a9e3f7a6bfe821b89ac05de3b7fda93766691a645c2e76e909b135380\" returns successfully" Jul 15 04:40:24.274730 containerd[1889]: time="2025-07-15T04:40:24.274620798Z" level=info msg="StartContainer for \"135d5759bfaf99fc57faf5b9e862e9868ccf1c40a094cf0b145779e21343f09c\" returns successfully" Jul 15 04:40:25.303775 kubelet[2951]: E0715 04:40:25.303726 2951 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4396.0.0-n-9104e8bf1a\" not found" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:25.631264 kubelet[2951]: E0715 04:40:25.631217 2951 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4396.0.0-n-9104e8bf1a" not found Jul 15 04:40:25.702144 kubelet[2951]: I0715 04:40:25.702097 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:25.713849 kubelet[2951]: I0715 04:40:25.713816 2951 kubelet_node_status.go:75] "Successfully registered node" node="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:25.713849 kubelet[2951]: E0715 04:40:25.713850 2951 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4396.0.0-n-9104e8bf1a\": node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:25.721633 kubelet[2951]: E0715 04:40:25.721601 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:25.822744 kubelet[2951]: E0715 04:40:25.822693 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:25.923258 kubelet[2951]: E0715 04:40:25.923135 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 
04:40:26.023784 kubelet[2951]: E0715 04:40:26.023664 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:26.124804 kubelet[2951]: E0715 04:40:26.124740 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:26.225644 kubelet[2951]: E0715 04:40:26.225497 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found" Jul 15 04:40:26.545513 kubelet[2951]: W0715 04:40:26.545394 2951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:40:27.241158 kubelet[2951]: I0715 04:40:27.241076 2951 apiserver.go:52] "Watching apiserver" Jul 15 04:40:27.247835 kubelet[2951]: I0715 04:40:27.247790 2951 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 04:40:27.539047 kubelet[2951]: W0715 04:40:27.538921 2951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:40:27.539047 kubelet[2951]: E0715 04:40:27.538995 2951 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" already exists" pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a" Jul 15 04:40:28.016754 systemd[1]: Reload requested from client PID 3288 ('systemctl') (unit session-9.scope)... Jul 15 04:40:28.017039 systemd[1]: Reloading... Jul 15 04:40:28.059184 kubelet[2951]: W0715 04:40:28.059147 2951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 15 04:40:28.100760 zram_generator::config[3334]: No configuration found. 
Jul 15 04:40:28.169327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:40:28.262842 systemd[1]: Reloading finished in 245 ms. Jul 15 04:40:28.285358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:40:28.298106 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:40:28.298445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:40:28.298570 systemd[1]: kubelet.service: Consumed 545ms CPU time, 124.7M memory peak. Jul 15 04:40:28.300576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:40:28.405803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:40:28.414043 (kubelet)[3397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:40:28.437752 kubelet[3397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:40:28.437752 kubelet[3397]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 04:40:28.437752 kubelet[3397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 04:40:28.438442 kubelet[3397]: I0715 04:40:28.438395 3397 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 04:40:28.443142 kubelet[3397]: I0715 04:40:28.443107 3397 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 04:40:28.443244 kubelet[3397]: I0715 04:40:28.443235 3397 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 04:40:28.443455 kubelet[3397]: I0715 04:40:28.443439 3397 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 04:40:28.444499 kubelet[3397]: I0715 04:40:28.444475 3397 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 15 04:40:28.446038 kubelet[3397]: I0715 04:40:28.445899 3397 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 04:40:28.449066 kubelet[3397]: I0715 04:40:28.449021 3397 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 04:40:28.452738 kubelet[3397]: I0715 04:40:28.451845 3397 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 04:40:28.452738 kubelet[3397]: I0715 04:40:28.451957 3397 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 04:40:28.452738 kubelet[3397]: I0715 04:40:28.452034 3397 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 04:40:28.452738 kubelet[3397]: I0715 04:40:28.452050 3397 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4396.0.0-n-9104e8bf1a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 04:40:28.452897 kubelet[3397]: I0715 04:40:28.452222 3397 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 04:40:28.452897 kubelet[3397]: I0715 04:40:28.452229 3397 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 04:40:28.452897 kubelet[3397]: I0715 04:40:28.452255 3397 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:40:28.452897 kubelet[3397]: I0715 04:40:28.452333 3397 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 04:40:28.452897 kubelet[3397]: I0715 04:40:28.452342 3397 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 04:40:28.452897 kubelet[3397]: I0715 04:40:28.452357 3397 kubelet.go:314] "Adding apiserver pod source"
Jul 15 04:40:28.452897 kubelet[3397]: I0715 04:40:28.452368 3397 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 04:40:28.456609 kubelet[3397]: I0715 04:40:28.456592 3397 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 15 04:40:28.457010 kubelet[3397]: I0715 04:40:28.456993 3397 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 04:40:28.457390 kubelet[3397]: I0715 04:40:28.457372 3397 server.go:1274] "Started kubelet"
Jul 15 04:40:28.458775 kubelet[3397]: I0715 04:40:28.458757 3397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 04:40:28.461466 kubelet[3397]: I0715 04:40:28.461416 3397 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 04:40:28.462966 kubelet[3397]: I0715 04:40:28.462938 3397 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 04:40:28.463574 kubelet[3397]: I0715 04:40:28.463525 3397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 04:40:28.463702 kubelet[3397]: I0715 04:40:28.463689 3397 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 04:40:28.463956 kubelet[3397]: E0715 04:40:28.463937 3397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4396.0.0-n-9104e8bf1a\" not found"
Jul 15 04:40:28.466567 kubelet[3397]: I0715 04:40:28.465476 3397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 04:40:28.466751 kubelet[3397]: I0715 04:40:28.466736 3397 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 04:40:28.466823 kubelet[3397]: I0715 04:40:28.463725 3397 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 04:40:28.466994 kubelet[3397]: I0715 04:40:28.466983 3397 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 04:40:28.470113 kubelet[3397]: I0715 04:40:28.470088 3397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 04:40:28.471137 kubelet[3397]: I0715 04:40:28.471117 3397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 04:40:28.471233 kubelet[3397]: I0715 04:40:28.471224 3397 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 04:40:28.471300 kubelet[3397]: I0715 04:40:28.471292 3397 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 04:40:28.471386 kubelet[3397]: E0715 04:40:28.471370 3397 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 04:40:28.479203 kubelet[3397]: I0715 04:40:28.479167 3397 factory.go:221] Registration of the systemd container factory successfully
Jul 15 04:40:28.480117 kubelet[3397]: I0715 04:40:28.480065 3397 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 04:40:28.484846 kubelet[3397]: E0715 04:40:28.484814 3397 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 04:40:28.485406 kubelet[3397]: I0715 04:40:28.485376 3397 factory.go:221] Registration of the containerd container factory successfully
Jul 15 04:40:28.526653 kubelet[3397]: I0715 04:40:28.526627 3397 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 04:40:28.526653 kubelet[3397]: I0715 04:40:28.526644 3397 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 04:40:28.526653 kubelet[3397]: I0715 04:40:28.526662 3397 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 04:40:28.526916 kubelet[3397]: I0715 04:40:28.526864 3397 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 04:40:28.526916 kubelet[3397]: I0715 04:40:28.526877 3397 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 04:40:28.526916 kubelet[3397]: I0715 04:40:28.526893 3397 policy_none.go:49] "None policy: Start"
Jul 15 04:40:28.527579 kubelet[3397]: I0715 04:40:28.527562 3397 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 04:40:28.527579 kubelet[3397]: I0715 04:40:28.527583 3397 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 04:40:28.527756 kubelet[3397]: I0715 04:40:28.527744 3397 state_mem.go:75] "Updated machine memory state"
Jul 15 04:40:28.531688 kubelet[3397]: I0715 04:40:28.531632 3397 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 04:40:28.531926 kubelet[3397]: I0715 04:40:28.531806 3397 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 04:40:28.533826 kubelet[3397]: I0715 04:40:28.531819 3397 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 04:40:28.533826 kubelet[3397]: I0715 04:40:28.533578 3397 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 04:40:28.578213 kubelet[3397]: W0715 04:40:28.578096 3397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 15 04:40:28.586438 kubelet[3397]: W0715 04:40:28.586387 3397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 15 04:40:28.586572 kubelet[3397]: E0715 04:40:28.586478 3397 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" already exists" pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.587005 kubelet[3397]: W0715 04:40:28.586983 3397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 15 04:40:28.587077 kubelet[3397]: E0715 04:40:28.587019 3397 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" already exists" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.636979 kubelet[3397]: I0715 04:40:28.636856 3397 kubelet_node_status.go:72] "Attempting to register node" node="ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.649869 kubelet[3397]: I0715 04:40:28.649842 3397 kubelet_node_status.go:111] "Node was previously registered" node="ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.649967 kubelet[3397]: I0715 04:40:28.649904 3397 kubelet_node_status.go:75] "Successfully registered node" node="ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.667898 kubelet[3397]: I0715 04:40:28.667734 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-ca-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.667898 kubelet[3397]: I0715 04:40:28.667768 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-k8s-certs\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.667898 kubelet[3397]: I0715 04:40:28.667781 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-kubeconfig\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.667898 kubelet[3397]: I0715 04:40:28.667798 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.667898 kubelet[3397]: I0715 04:40:28.667811 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2c47066e52e2635315455d1028cb9cd-ca-certs\") pod \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" (UID: \"d2c47066e52e2635315455d1028cb9cd\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.668079 kubelet[3397]: I0715 04:40:28.667845 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2c47066e52e2635315455d1028cb9cd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" (UID: \"d2c47066e52e2635315455d1028cb9cd\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.668079 kubelet[3397]: I0715 04:40:28.667857 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a684775aed220d45a95d504fad4287fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4396.0.0-n-9104e8bf1a\" (UID: \"a684775aed220d45a95d504fad4287fd\") " pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.668079 kubelet[3397]: I0715 04:40:28.667866 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccd36bc283a7316ecbf8c3553786b9ee-kubeconfig\") pod \"kube-scheduler-ci-4396.0.0-n-9104e8bf1a\" (UID: \"ccd36bc283a7316ecbf8c3553786b9ee\") " pod="kube-system/kube-scheduler-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:28.668079 kubelet[3397]: I0715 04:40:28.667893 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2c47066e52e2635315455d1028cb9cd-k8s-certs\") pod \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" (UID: \"d2c47066e52e2635315455d1028cb9cd\") " pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:29.456749 kubelet[3397]: I0715 04:40:29.456647 3397 apiserver.go:52] "Watching apiserver"
Jul 15 04:40:29.468044 kubelet[3397]: I0715 04:40:29.467998 3397 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 15 04:40:29.530632 kubelet[3397]: W0715 04:40:29.530561 3397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 15 04:40:29.530968 kubelet[3397]: E0715 04:40:29.530833 3397 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4396.0.0-n-9104e8bf1a\" already exists" pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:29.533945 kubelet[3397]: W0715 04:40:29.533694 3397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 15 04:40:29.534210 kubelet[3397]: E0715 04:40:29.534184 3397 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4396.0.0-n-9104e8bf1a\" already exists" pod="kube-system/kube-scheduler-ci-4396.0.0-n-9104e8bf1a"
Jul 15 04:40:29.545813 kubelet[3397]: I0715 04:40:29.545674 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4396.0.0-n-9104e8bf1a" podStartSLOduration=3.545658249 podStartE2EDuration="3.545658249s" podCreationTimestamp="2025-07-15 04:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:29.53401078 +0000 UTC m=+1.117212023" watchObservedRunningTime="2025-07-15 04:40:29.545658249 +0000 UTC m=+1.128859492"
Jul 15 04:40:29.555237 kubelet[3397]: I0715 04:40:29.555172 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4396.0.0-n-9104e8bf1a" podStartSLOduration=1.555154661 podStartE2EDuration="1.555154661s" podCreationTimestamp="2025-07-15 04:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:29.546021893 +0000 UTC m=+1.129223136" watchObservedRunningTime="2025-07-15 04:40:29.555154661 +0000 UTC m=+1.138355912"
Jul 15 04:40:29.555375 kubelet[3397]: I0715 04:40:29.555256 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4396.0.0-n-9104e8bf1a" podStartSLOduration=1.5552520159999998 podStartE2EDuration="1.555252016s" podCreationTimestamp="2025-07-15 04:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:29.555001776 +0000 UTC m=+1.138203019" watchObservedRunningTime="2025-07-15 04:40:29.555252016 +0000 UTC m=+1.138453259"
Jul 15 04:40:33.287072 kubelet[3397]: I0715 04:40:33.287034 3397 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 04:40:33.287806 containerd[1889]: time="2025-07-15T04:40:33.287335447Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 04:40:33.288392 kubelet[3397]: I0715 04:40:33.288044 3397 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 04:40:34.059517 systemd[1]: Created slice kubepods-besteffort-podb176fb56_4d7b_4439_b49f_af5f87c35c07.slice - libcontainer container kubepods-besteffort-podb176fb56_4d7b_4439_b49f_af5f87c35c07.slice.
Jul 15 04:40:34.196398 kubelet[3397]: I0715 04:40:34.196354 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b176fb56-4d7b-4439-b49f-af5f87c35c07-kube-proxy\") pod \"kube-proxy-58jwr\" (UID: \"b176fb56-4d7b-4439-b49f-af5f87c35c07\") " pod="kube-system/kube-proxy-58jwr"
Jul 15 04:40:34.196398 kubelet[3397]: I0715 04:40:34.196392 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b176fb56-4d7b-4439-b49f-af5f87c35c07-lib-modules\") pod \"kube-proxy-58jwr\" (UID: \"b176fb56-4d7b-4439-b49f-af5f87c35c07\") " pod="kube-system/kube-proxy-58jwr"
Jul 15 04:40:34.196398 kubelet[3397]: I0715 04:40:34.196407 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b176fb56-4d7b-4439-b49f-af5f87c35c07-xtables-lock\") pod \"kube-proxy-58jwr\" (UID: \"b176fb56-4d7b-4439-b49f-af5f87c35c07\") " pod="kube-system/kube-proxy-58jwr"
Jul 15 04:40:34.196398 kubelet[3397]: I0715 04:40:34.196418 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9447\" (UniqueName: \"kubernetes.io/projected/b176fb56-4d7b-4439-b49f-af5f87c35c07-kube-api-access-h9447\") pod \"kube-proxy-58jwr\" (UID: \"b176fb56-4d7b-4439-b49f-af5f87c35c07\") " pod="kube-system/kube-proxy-58jwr"
Jul 15 04:40:34.367938 containerd[1889]: time="2025-07-15T04:40:34.367890882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58jwr,Uid:b176fb56-4d7b-4439-b49f-af5f87c35c07,Namespace:kube-system,Attempt:0,}"
Jul 15 04:40:34.400221 systemd[1]: Created slice kubepods-besteffort-pode64aa932_d577_4a4b_8dcc_5e23f244ee64.slice - libcontainer container kubepods-besteffort-pode64aa932_d577_4a4b_8dcc_5e23f244ee64.slice.
Jul 15 04:40:34.430976 containerd[1889]: time="2025-07-15T04:40:34.430923848Z" level=info msg="connecting to shim 6b69572c0ee7a0407943e2e2c0d09e76418c355bcb7b9d5ff9704cb50d097b9d" address="unix:///run/containerd/s/6951ae6e5bbca0233bc5bbdcb838705f3d89aefc954ac8da27939e51954548c7" namespace=k8s.io protocol=ttrpc version=3
Jul 15 04:40:34.451891 systemd[1]: Started cri-containerd-6b69572c0ee7a0407943e2e2c0d09e76418c355bcb7b9d5ff9704cb50d097b9d.scope - libcontainer container 6b69572c0ee7a0407943e2e2c0d09e76418c355bcb7b9d5ff9704cb50d097b9d.
Jul 15 04:40:34.477607 containerd[1889]: time="2025-07-15T04:40:34.477537991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58jwr,Uid:b176fb56-4d7b-4439-b49f-af5f87c35c07,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b69572c0ee7a0407943e2e2c0d09e76418c355bcb7b9d5ff9704cb50d097b9d\""
Jul 15 04:40:34.481251 containerd[1889]: time="2025-07-15T04:40:34.481217754Z" level=info msg="CreateContainer within sandbox \"6b69572c0ee7a0407943e2e2c0d09e76418c355bcb7b9d5ff9704cb50d097b9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 04:40:34.497431 kubelet[3397]: I0715 04:40:34.497384 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e64aa932-d577-4a4b-8dcc-5e23f244ee64-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-7nbf7\" (UID: \"e64aa932-d577-4a4b-8dcc-5e23f244ee64\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7nbf7"
Jul 15 04:40:34.497864 kubelet[3397]: I0715 04:40:34.497825 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhfl\" (UniqueName: \"kubernetes.io/projected/e64aa932-d577-4a4b-8dcc-5e23f244ee64-kube-api-access-mkhfl\") pod \"tigera-operator-5bf8dfcb4-7nbf7\" (UID: \"e64aa932-d577-4a4b-8dcc-5e23f244ee64\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7nbf7"
Jul 15 04:40:34.518747 containerd[1889]: time="2025-07-15T04:40:34.518487910Z" level=info msg="Container d92963e0d0343591f679c4978a7e22d5e9e75ebcfc57b6119e47e9d15b77957b: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:34.541577 containerd[1889]: time="2025-07-15T04:40:34.541522083Z" level=info msg="CreateContainer within sandbox \"6b69572c0ee7a0407943e2e2c0d09e76418c355bcb7b9d5ff9704cb50d097b9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d92963e0d0343591f679c4978a7e22d5e9e75ebcfc57b6119e47e9d15b77957b\""
Jul 15 04:40:34.544750 containerd[1889]: time="2025-07-15T04:40:34.542953740Z" level=info msg="StartContainer for \"d92963e0d0343591f679c4978a7e22d5e9e75ebcfc57b6119e47e9d15b77957b\""
Jul 15 04:40:34.545150 containerd[1889]: time="2025-07-15T04:40:34.545117554Z" level=info msg="connecting to shim d92963e0d0343591f679c4978a7e22d5e9e75ebcfc57b6119e47e9d15b77957b" address="unix:///run/containerd/s/6951ae6e5bbca0233bc5bbdcb838705f3d89aefc954ac8da27939e51954548c7" protocol=ttrpc version=3
Jul 15 04:40:34.564873 systemd[1]: Started cri-containerd-d92963e0d0343591f679c4978a7e22d5e9e75ebcfc57b6119e47e9d15b77957b.scope - libcontainer container d92963e0d0343591f679c4978a7e22d5e9e75ebcfc57b6119e47e9d15b77957b.
Jul 15 04:40:34.597333 containerd[1889]: time="2025-07-15T04:40:34.597293655Z" level=info msg="StartContainer for \"d92963e0d0343591f679c4978a7e22d5e9e75ebcfc57b6119e47e9d15b77957b\" returns successfully"
Jul 15 04:40:34.705906 containerd[1889]: time="2025-07-15T04:40:34.705414144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7nbf7,Uid:e64aa932-d577-4a4b-8dcc-5e23f244ee64,Namespace:tigera-operator,Attempt:0,}"
Jul 15 04:40:34.766661 containerd[1889]: time="2025-07-15T04:40:34.766587635Z" level=info msg="connecting to shim ac3fe96b4627f39cc22fb2c155d120e27d8d56c96ba82b9a7c451b9a6f945a96" address="unix:///run/containerd/s/78bfa035c99c95e1a93eed9e07fda6959460ec597cf3ff9205a590835c0692dd" namespace=k8s.io protocol=ttrpc version=3
Jul 15 04:40:34.784866 systemd[1]: Started cri-containerd-ac3fe96b4627f39cc22fb2c155d120e27d8d56c96ba82b9a7c451b9a6f945a96.scope - libcontainer container ac3fe96b4627f39cc22fb2c155d120e27d8d56c96ba82b9a7c451b9a6f945a96.
Jul 15 04:40:34.824906 containerd[1889]: time="2025-07-15T04:40:34.824864099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7nbf7,Uid:e64aa932-d577-4a4b-8dcc-5e23f244ee64,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ac3fe96b4627f39cc22fb2c155d120e27d8d56c96ba82b9a7c451b9a6f945a96\""
Jul 15 04:40:34.827507 containerd[1889]: time="2025-07-15T04:40:34.827465355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 04:40:36.574543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887056471.mount: Deactivated successfully.
Jul 15 04:40:36.961997 containerd[1889]: time="2025-07-15T04:40:36.961928188Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:36.965536 containerd[1889]: time="2025-07-15T04:40:36.965480585Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 15 04:40:36.970575 containerd[1889]: time="2025-07-15T04:40:36.970504913Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:36.975053 containerd[1889]: time="2025-07-15T04:40:36.975007844Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:40:36.975648 containerd[1889]: time="2025-07-15T04:40:36.975539289Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.148038645s"
Jul 15 04:40:36.975648 containerd[1889]: time="2025-07-15T04:40:36.975569147Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 15 04:40:36.980728 containerd[1889]: time="2025-07-15T04:40:36.978687559Z" level=info msg="CreateContainer within sandbox \"ac3fe96b4627f39cc22fb2c155d120e27d8d56c96ba82b9a7c451b9a6f945a96\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 04:40:37.019069 containerd[1889]: time="2025-07-15T04:40:37.019033941Z" level=info msg="Container 5d65219220dd13475c40cdfa5765e3b82ae04fa9821b409c00b907b924bfffe1: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:40:37.020332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120608328.mount: Deactivated successfully.
Jul 15 04:40:37.041119 containerd[1889]: time="2025-07-15T04:40:37.041078194Z" level=info msg="CreateContainer within sandbox \"ac3fe96b4627f39cc22fb2c155d120e27d8d56c96ba82b9a7c451b9a6f945a96\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5d65219220dd13475c40cdfa5765e3b82ae04fa9821b409c00b907b924bfffe1\""
Jul 15 04:40:37.042037 containerd[1889]: time="2025-07-15T04:40:37.041987631Z" level=info msg="StartContainer for \"5d65219220dd13475c40cdfa5765e3b82ae04fa9821b409c00b907b924bfffe1\""
Jul 15 04:40:37.043074 containerd[1889]: time="2025-07-15T04:40:37.043010815Z" level=info msg="connecting to shim 5d65219220dd13475c40cdfa5765e3b82ae04fa9821b409c00b907b924bfffe1" address="unix:///run/containerd/s/78bfa035c99c95e1a93eed9e07fda6959460ec597cf3ff9205a590835c0692dd" protocol=ttrpc version=3
Jul 15 04:40:37.061867 systemd[1]: Started cri-containerd-5d65219220dd13475c40cdfa5765e3b82ae04fa9821b409c00b907b924bfffe1.scope - libcontainer container 5d65219220dd13475c40cdfa5765e3b82ae04fa9821b409c00b907b924bfffe1.
Jul 15 04:40:37.087833 containerd[1889]: time="2025-07-15T04:40:37.087794478Z" level=info msg="StartContainer for \"5d65219220dd13475c40cdfa5765e3b82ae04fa9821b409c00b907b924bfffe1\" returns successfully"
Jul 15 04:40:37.543434 kubelet[3397]: I0715 04:40:37.543375 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-58jwr" podStartSLOduration=3.543261541 podStartE2EDuration="3.543261541s" podCreationTimestamp="2025-07-15 04:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:35.542247259 +0000 UTC m=+7.125448518" watchObservedRunningTime="2025-07-15 04:40:37.543261541 +0000 UTC m=+9.126462792"
Jul 15 04:40:39.300508 kubelet[3397]: I0715 04:40:39.300330 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-7nbf7" podStartSLOduration=3.150257727 podStartE2EDuration="5.300314133s" podCreationTimestamp="2025-07-15 04:40:34 +0000 UTC" firstStartedPulling="2025-07-15 04:40:34.826816537 +0000 UTC m=+6.410017780" lastFinishedPulling="2025-07-15 04:40:36.976872935 +0000 UTC m=+8.560074186" observedRunningTime="2025-07-15 04:40:37.543704012 +0000 UTC m=+9.126905255" watchObservedRunningTime="2025-07-15 04:40:39.300314133 +0000 UTC m=+10.883515376"
Jul 15 04:40:42.188080 sudo[2378]: pam_unix(sudo:session): session closed for user root
Jul 15 04:40:42.264234 sshd[2377]: Connection closed by 10.200.16.10 port 44118
Jul 15 04:40:42.265912 sshd-session[2374]: pam_unix(sshd:session): session closed for user core
Jul 15 04:40:42.271112 systemd-logind[1868]: Session 9 logged out. Waiting for processes to exit.
Jul 15 04:40:42.272206 systemd[1]: sshd@6-10.200.20.21:22-10.200.16.10:44118.service: Deactivated successfully.
Jul 15 04:40:42.278382 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 04:40:42.278831 systemd[1]: session-9.scope: Consumed 3.454s CPU time, 220.2M memory peak.
Jul 15 04:40:42.283241 systemd-logind[1868]: Removed session 9.
Jul 15 04:40:45.601007 systemd[1]: Created slice kubepods-besteffort-poda934133d_7068_41ff_9e7e_10f3c576dfa0.slice - libcontainer container kubepods-besteffort-poda934133d_7068_41ff_9e7e_10f3c576dfa0.slice.
Jul 15 04:40:45.758549 kubelet[3397]: I0715 04:40:45.758475 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a934133d-7068-41ff-9e7e-10f3c576dfa0-typha-certs\") pod \"calico-typha-5ff7487ffc-t466k\" (UID: \"a934133d-7068-41ff-9e7e-10f3c576dfa0\") " pod="calico-system/calico-typha-5ff7487ffc-t466k"
Jul 15 04:40:45.758549 kubelet[3397]: I0715 04:40:45.758510 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a934133d-7068-41ff-9e7e-10f3c576dfa0-tigera-ca-bundle\") pod \"calico-typha-5ff7487ffc-t466k\" (UID: \"a934133d-7068-41ff-9e7e-10f3c576dfa0\") " pod="calico-system/calico-typha-5ff7487ffc-t466k"
Jul 15 04:40:45.758549 kubelet[3397]: I0715 04:40:45.758527 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlh86\" (UniqueName: \"kubernetes.io/projected/a934133d-7068-41ff-9e7e-10f3c576dfa0-kube-api-access-qlh86\") pod \"calico-typha-5ff7487ffc-t466k\" (UID: \"a934133d-7068-41ff-9e7e-10f3c576dfa0\") " pod="calico-system/calico-typha-5ff7487ffc-t466k"
Jul 15 04:40:45.760612 systemd[1]: Created slice kubepods-besteffort-pod6441a699_42f2_43df_9cc4_a3ba4426af9b.slice - libcontainer container kubepods-besteffort-pod6441a699_42f2_43df_9cc4_a3ba4426af9b.slice.
Jul 15 04:40:45.859379 kubelet[3397]: I0715 04:40:45.859174 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6441a699-42f2-43df-9cc4-a3ba4426af9b-node-certs\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.859379 kubelet[3397]: I0715 04:40:45.859302 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-var-run-calico\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.859379 kubelet[3397]: I0715 04:40:45.859320 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zq4s\" (UniqueName: \"kubernetes.io/projected/6441a699-42f2-43df-9cc4-a3ba4426af9b-kube-api-access-5zq4s\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.859379 kubelet[3397]: I0715 04:40:45.859333 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-lib-modules\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.859379 kubelet[3397]: I0715 04:40:45.859361 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-cni-bin-dir\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.859572 kubelet[3397]: I0715 04:40:45.859380 3397 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-policysync\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.859572 kubelet[3397]: I0715 04:40:45.859389 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6441a699-42f2-43df-9cc4-a3ba4426af9b-tigera-ca-bundle\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.859572 kubelet[3397]: I0715 04:40:45.859401 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-var-lib-calico\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.860531 kubelet[3397]: I0715 04:40:45.859705 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-cni-log-dir\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.860531 kubelet[3397]: I0715 04:40:45.859762 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-xtables-lock\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.860531 kubelet[3397]: I0715 04:40:45.859782 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-cni-net-dir\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.860531 kubelet[3397]: I0715 04:40:45.859791 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6441a699-42f2-43df-9cc4-a3ba4426af9b-flexvol-driver-host\") pod \"calico-node-94r5s\" (UID: \"6441a699-42f2-43df-9cc4-a3ba4426af9b\") " pod="calico-system/calico-node-94r5s" Jul 15 04:40:45.904580 containerd[1889]: time="2025-07-15T04:40:45.904544743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ff7487ffc-t466k,Uid:a934133d-7068-41ff-9e7e-10f3c576dfa0,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:45.951964 kubelet[3397]: E0715 04:40:45.951734 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdg2" podUID="5ed20801-e92d-42b2-94d6-5d7666efeedc" Jul 15 04:40:45.963890 kubelet[3397]: E0715 04:40:45.963785 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.963890 kubelet[3397]: W0715 04:40:45.963821 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.963890 kubelet[3397]: E0715 04:40:45.963843 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:45.964308 kubelet[3397]: E0715 04:40:45.964286 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.964461 kubelet[3397]: W0715 04:40:45.964358 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.966857 kubelet[3397]: E0715 04:40:45.966837 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.967049 kubelet[3397]: W0715 04:40:45.966925 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.967049 kubelet[3397]: E0715 04:40:45.966945 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:45.967486 kubelet[3397]: E0715 04:40:45.967467 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.967664 kubelet[3397]: W0715 04:40:45.967593 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.967664 kubelet[3397]: E0715 04:40:45.967613 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:45.970626 kubelet[3397]: E0715 04:40:45.970587 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:45.972466 kubelet[3397]: E0715 04:40:45.972441 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.972783 kubelet[3397]: W0715 04:40:45.972651 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.972783 kubelet[3397]: E0715 04:40:45.972687 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:45.973437 kubelet[3397]: E0715 04:40:45.973335 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.974706 kubelet[3397]: W0715 04:40:45.974537 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.975765 kubelet[3397]: E0715 04:40:45.975548 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.976199 kubelet[3397]: W0715 04:40:45.975862 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.976199 kubelet[3397]: E0715 04:40:45.975888 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:45.976297 kubelet[3397]: E0715 04:40:45.976265 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:45.977274 kubelet[3397]: E0715 04:40:45.977223 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.978969 kubelet[3397]: W0715 04:40:45.978774 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.978969 kubelet[3397]: E0715 04:40:45.978804 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:45.979164 kubelet[3397]: E0715 04:40:45.979150 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:45.979247 kubelet[3397]: W0715 04:40:45.979234 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:45.979330 kubelet[3397]: E0715 04:40:45.979299 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:45.983893 containerd[1889]: time="2025-07-15T04:40:45.983851660Z" level=info msg="connecting to shim d860f2cf67fcc90e4f0b39a72f150c952323f58b29db297cd1393cf326606797" address="unix:///run/containerd/s/5fe95ebaceb258ba278550908411e88f643d71cac79a7827efce5f0d304fbc53" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:46.014992 kubelet[3397]: E0715 04:40:46.013610 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.014992 kubelet[3397]: W0715 04:40:46.014826 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.014992 kubelet[3397]: E0715 04:40:46.014859 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.024514 systemd[1]: Started cri-containerd-d860f2cf67fcc90e4f0b39a72f150c952323f58b29db297cd1393cf326606797.scope - libcontainer container d860f2cf67fcc90e4f0b39a72f150c952323f58b29db297cd1393cf326606797. Jul 15 04:40:46.061884 kubelet[3397]: E0715 04:40:46.061814 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.062289 kubelet[3397]: W0715 04:40:46.062265 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.062578 kubelet[3397]: E0715 04:40:46.062559 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.062704 kubelet[3397]: I0715 04:40:46.062690 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5ed20801-e92d-42b2-94d6-5d7666efeedc-varrun\") pod \"csi-node-driver-mrdg2\" (UID: \"5ed20801-e92d-42b2-94d6-5d7666efeedc\") " pod="calico-system/csi-node-driver-mrdg2" Jul 15 04:40:46.063276 kubelet[3397]: E0715 04:40:46.063254 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.063625 kubelet[3397]: W0715 04:40:46.063597 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.063695 kubelet[3397]: E0715 04:40:46.063633 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.064337 kubelet[3397]: E0715 04:40:46.064321 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.064337 kubelet[3397]: W0715 04:40:46.064334 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.064515 kubelet[3397]: E0715 04:40:46.064466 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.065189 kubelet[3397]: E0715 04:40:46.065048 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.065189 kubelet[3397]: W0715 04:40:46.065063 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.065189 kubelet[3397]: E0715 04:40:46.065074 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.065189 kubelet[3397]: I0715 04:40:46.065092 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2lb\" (UniqueName: \"kubernetes.io/projected/5ed20801-e92d-42b2-94d6-5d7666efeedc-kube-api-access-sq2lb\") pod \"csi-node-driver-mrdg2\" (UID: \"5ed20801-e92d-42b2-94d6-5d7666efeedc\") " pod="calico-system/csi-node-driver-mrdg2" Jul 15 04:40:46.066070 kubelet[3397]: E0715 04:40:46.065997 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.066070 kubelet[3397]: W0715 04:40:46.066014 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.066070 kubelet[3397]: E0715 04:40:46.066025 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.066070 kubelet[3397]: I0715 04:40:46.066041 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5ed20801-e92d-42b2-94d6-5d7666efeedc-registration-dir\") pod \"csi-node-driver-mrdg2\" (UID: \"5ed20801-e92d-42b2-94d6-5d7666efeedc\") " pod="calico-system/csi-node-driver-mrdg2" Jul 15 04:40:46.066629 kubelet[3397]: E0715 04:40:46.066607 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.066629 kubelet[3397]: W0715 04:40:46.066623 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.067191 kubelet[3397]: E0715 04:40:46.066634 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.067191 kubelet[3397]: I0715 04:40:46.067190 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5ed20801-e92d-42b2-94d6-5d7666efeedc-socket-dir\") pod \"csi-node-driver-mrdg2\" (UID: \"5ed20801-e92d-42b2-94d6-5d7666efeedc\") " pod="calico-system/csi-node-driver-mrdg2" Jul 15 04:40:46.068296 kubelet[3397]: E0715 04:40:46.068277 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.068296 kubelet[3397]: W0715 04:40:46.068291 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.069152 kubelet[3397]: E0715 04:40:46.069084 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.069152 kubelet[3397]: I0715 04:40:46.069118 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ed20801-e92d-42b2-94d6-5d7666efeedc-kubelet-dir\") pod \"csi-node-driver-mrdg2\" (UID: \"5ed20801-e92d-42b2-94d6-5d7666efeedc\") " pod="calico-system/csi-node-driver-mrdg2" Jul 15 04:40:46.069367 kubelet[3397]: E0715 04:40:46.069336 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.069367 kubelet[3397]: W0715 04:40:46.069348 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.069533 kubelet[3397]: E0715 04:40:46.069517 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.069799 containerd[1889]: time="2025-07-15T04:40:46.069705634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-94r5s,Uid:6441a699-42f2-43df-9cc4-a3ba4426af9b,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:46.069970 kubelet[3397]: E0715 04:40:46.069940 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.069970 kubelet[3397]: W0715 04:40:46.069952 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.070113 kubelet[3397]: E0715 04:40:46.070087 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.070906 kubelet[3397]: E0715 04:40:46.070882 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.070906 kubelet[3397]: W0715 04:40:46.070900 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.071245 kubelet[3397]: E0715 04:40:46.071061 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.071443 kubelet[3397]: E0715 04:40:46.071424 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.071443 kubelet[3397]: W0715 04:40:46.071440 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.071624 kubelet[3397]: E0715 04:40:46.071468 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.072065 kubelet[3397]: E0715 04:40:46.072044 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.072355 kubelet[3397]: W0715 04:40:46.072307 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.072355 kubelet[3397]: E0715 04:40:46.072330 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.072881 kubelet[3397]: E0715 04:40:46.072831 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.072881 kubelet[3397]: W0715 04:40:46.072844 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.072881 kubelet[3397]: E0715 04:40:46.072855 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.073565 kubelet[3397]: E0715 04:40:46.073530 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.073565 kubelet[3397]: W0715 04:40:46.073544 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.073565 kubelet[3397]: E0715 04:40:46.073555 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.074738 kubelet[3397]: E0715 04:40:46.074683 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.074738 kubelet[3397]: W0715 04:40:46.074709 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.074738 kubelet[3397]: E0715 04:40:46.074744 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.099393 containerd[1889]: time="2025-07-15T04:40:46.099329545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ff7487ffc-t466k,Uid:a934133d-7068-41ff-9e7e-10f3c576dfa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d860f2cf67fcc90e4f0b39a72f150c952323f58b29db297cd1393cf326606797\"" Jul 15 04:40:46.103525 containerd[1889]: time="2025-07-15T04:40:46.102581880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 04:40:46.161203 containerd[1889]: time="2025-07-15T04:40:46.161104301Z" level=info msg="connecting to shim 096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65" address="unix:///run/containerd/s/8f82333513b00a6e370b3ffe784d57794ca55a85b35afa7ee4903dd28c434aee" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:46.170657 kubelet[3397]: E0715 04:40:46.170619 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.170866 kubelet[3397]: W0715 04:40:46.170649 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.170866 kubelet[3397]: E0715 04:40:46.170749 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.171423 kubelet[3397]: E0715 04:40:46.171405 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.171423 kubelet[3397]: W0715 04:40:46.171419 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.171423 kubelet[3397]: E0715 04:40:46.171435 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.171907 kubelet[3397]: E0715 04:40:46.171706 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.171907 kubelet[3397]: W0715 04:40:46.171728 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.171907 kubelet[3397]: E0715 04:40:46.171749 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.172284 kubelet[3397]: E0715 04:40:46.172231 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.172488 kubelet[3397]: W0715 04:40:46.172406 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.172576 kubelet[3397]: E0715 04:40:46.172563 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.172977 kubelet[3397]: E0715 04:40:46.172953 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.172977 kubelet[3397]: W0715 04:40:46.172973 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.172977 kubelet[3397]: E0715 04:40:46.172987 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.173465 kubelet[3397]: E0715 04:40:46.173450 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.173891 kubelet[3397]: W0715 04:40:46.173822 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.173891 kubelet[3397]: E0715 04:40:46.173853 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.174038 kubelet[3397]: E0715 04:40:46.174021 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.174038 kubelet[3397]: W0715 04:40:46.174033 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.174118 kubelet[3397]: E0715 04:40:46.174048 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.174280 kubelet[3397]: E0715 04:40:46.174266 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.174280 kubelet[3397]: W0715 04:40:46.174277 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.174386 kubelet[3397]: E0715 04:40:46.174320 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.174559 kubelet[3397]: E0715 04:40:46.174543 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.174559 kubelet[3397]: W0715 04:40:46.174556 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.174818 kubelet[3397]: E0715 04:40:46.174584 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.174818 kubelet[3397]: E0715 04:40:46.174693 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.174818 kubelet[3397]: W0715 04:40:46.174702 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.174818 kubelet[3397]: E0715 04:40:46.174746 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.175165 kubelet[3397]: E0715 04:40:46.175039 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.175165 kubelet[3397]: W0715 04:40:46.175052 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.175165 kubelet[3397]: E0715 04:40:46.175155 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.175744 kubelet[3397]: E0715 04:40:46.175656 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.175744 kubelet[3397]: W0715 04:40:46.175672 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.176203 kubelet[3397]: E0715 04:40:46.175970 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.176778 kubelet[3397]: E0715 04:40:46.176636 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.176778 kubelet[3397]: W0715 04:40:46.176648 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.176778 kubelet[3397]: E0715 04:40:46.176666 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.177264 kubelet[3397]: E0715 04:40:46.177242 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.177468 kubelet[3397]: W0715 04:40:46.177355 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.177468 kubelet[3397]: E0715 04:40:46.177445 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.178218 kubelet[3397]: E0715 04:40:46.178020 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.178218 kubelet[3397]: W0715 04:40:46.178039 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.178468 kubelet[3397]: E0715 04:40:46.178452 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.178643 kubelet[3397]: E0715 04:40:46.178629 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.178785 kubelet[3397]: W0715 04:40:46.178705 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.178857 kubelet[3397]: E0715 04:40:46.178839 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.179025 kubelet[3397]: E0715 04:40:46.179014 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.179133 kubelet[3397]: W0715 04:40:46.179083 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.179133 kubelet[3397]: E0715 04:40:46.179115 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.179309 kubelet[3397]: E0715 04:40:46.179299 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.179672 kubelet[3397]: W0715 04:40:46.179553 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.179810 kubelet[3397]: E0715 04:40:46.179787 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.180197 kubelet[3397]: E0715 04:40:46.180148 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.180197 kubelet[3397]: W0715 04:40:46.180161 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.180476 kubelet[3397]: E0715 04:40:46.180443 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.180476 kubelet[3397]: W0715 04:40:46.180455 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.180781 kubelet[3397]: E0715 04:40:46.180690 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.180781 kubelet[3397]: W0715 04:40:46.180745 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.180963 kubelet[3397]: E0715 04:40:46.180942 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.181165 kubelet[3397]: E0715 04:40:46.181036 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.181165 kubelet[3397]: W0715 04:40:46.181048 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.181165 kubelet[3397]: E0715 04:40:46.181059 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.181247 kubelet[3397]: E0715 04:40:46.181230 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.181263 kubelet[3397]: E0715 04:40:46.181252 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.181497 kubelet[3397]: E0715 04:40:46.181484 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.181646 kubelet[3397]: W0715 04:40:46.181569 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.181646 kubelet[3397]: E0715 04:40:46.181601 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.181867 kubelet[3397]: E0715 04:40:46.181856 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.182191 kubelet[3397]: W0715 04:40:46.181948 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.182191 kubelet[3397]: E0715 04:40:46.181996 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.182644 kubelet[3397]: E0715 04:40:46.182630 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.182825 kubelet[3397]: W0715 04:40:46.182729 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.182984 kubelet[3397]: E0715 04:40:46.182896 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:46.186978 systemd[1]: Started cri-containerd-096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65.scope - libcontainer container 096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65. Jul 15 04:40:46.195966 kubelet[3397]: E0715 04:40:46.195941 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:46.195966 kubelet[3397]: W0715 04:40:46.195959 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:46.196078 kubelet[3397]: E0715 04:40:46.195976 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:46.239631 containerd[1889]: time="2025-07-15T04:40:46.239486954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-94r5s,Uid:6441a699-42f2-43df-9cc4-a3ba4426af9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65\"" Jul 15 04:40:47.472056 kubelet[3397]: E0715 04:40:47.471969 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdg2" podUID="5ed20801-e92d-42b2-94d6-5d7666efeedc" Jul 15 04:40:47.732222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621728319.mount: Deactivated successfully. Jul 15 04:40:48.128377 containerd[1889]: time="2025-07-15T04:40:48.127899061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:48.132955 containerd[1889]: time="2025-07-15T04:40:48.132926081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 15 04:40:48.138754 containerd[1889]: time="2025-07-15T04:40:48.138724336Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:48.145962 containerd[1889]: time="2025-07-15T04:40:48.145930455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:48.146550 containerd[1889]: time="2025-07-15T04:40:48.146274619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.04365941s" Jul 15 04:40:48.146550 containerd[1889]: time="2025-07-15T04:40:48.146306068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 04:40:48.147133 containerd[1889]: time="2025-07-15T04:40:48.147117088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 04:40:48.162599 containerd[1889]: time="2025-07-15T04:40:48.162550448Z" level=info msg="CreateContainer within sandbox \"d860f2cf67fcc90e4f0b39a72f150c952323f58b29db297cd1393cf326606797\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 04:40:48.211311 containerd[1889]: time="2025-07-15T04:40:48.211258333Z" level=info msg="Container aadac86a26578695ca73b90a4a0da5994c59f0440cf8d029968c91279c054746: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:48.238148 containerd[1889]: time="2025-07-15T04:40:48.238101189Z" level=info msg="CreateContainer within sandbox \"d860f2cf67fcc90e4f0b39a72f150c952323f58b29db297cd1393cf326606797\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aadac86a26578695ca73b90a4a0da5994c59f0440cf8d029968c91279c054746\"" Jul 15 04:40:48.239081 containerd[1889]: time="2025-07-15T04:40:48.238802237Z" level=info msg="StartContainer for \"aadac86a26578695ca73b90a4a0da5994c59f0440cf8d029968c91279c054746\"" Jul 15 04:40:48.239814 containerd[1889]: time="2025-07-15T04:40:48.239787326Z" level=info msg="connecting to shim aadac86a26578695ca73b90a4a0da5994c59f0440cf8d029968c91279c054746" address="unix:///run/containerd/s/5fe95ebaceb258ba278550908411e88f643d71cac79a7827efce5f0d304fbc53" protocol=ttrpc version=3 Jul 15 
04:40:48.257911 systemd[1]: Started cri-containerd-aadac86a26578695ca73b90a4a0da5994c59f0440cf8d029968c91279c054746.scope - libcontainer container aadac86a26578695ca73b90a4a0da5994c59f0440cf8d029968c91279c054746. Jul 15 04:40:48.295343 containerd[1889]: time="2025-07-15T04:40:48.295289396Z" level=info msg="StartContainer for \"aadac86a26578695ca73b90a4a0da5994c59f0440cf8d029968c91279c054746\" returns successfully" Jul 15 04:40:48.576116 kubelet[3397]: E0715 04:40:48.575212 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.577316 kubelet[3397]: W0715 04:40:48.576166 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.577316 kubelet[3397]: E0715 04:40:48.576198 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.577911 kubelet[3397]: E0715 04:40:48.577663 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.577911 kubelet[3397]: W0715 04:40:48.577679 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.577911 kubelet[3397]: E0715 04:40:48.577693 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.578312 kubelet[3397]: E0715 04:40:48.578060 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.578312 kubelet[3397]: W0715 04:40:48.578115 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.578312 kubelet[3397]: E0715 04:40:48.578146 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.579816 kubelet[3397]: E0715 04:40:48.579789 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.579915 kubelet[3397]: W0715 04:40:48.579902 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.579981 kubelet[3397]: E0715 04:40:48.579969 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.580378 kubelet[3397]: E0715 04:40:48.580361 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.580578 kubelet[3397]: W0715 04:40:48.580461 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.580578 kubelet[3397]: E0715 04:40:48.580480 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.580804 kubelet[3397]: E0715 04:40:48.580792 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.580942 kubelet[3397]: W0715 04:40:48.580830 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.580942 kubelet[3397]: E0715 04:40:48.580844 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.581152 kubelet[3397]: E0715 04:40:48.581119 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.581152 kubelet[3397]: W0715 04:40:48.581130 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.581275 kubelet[3397]: E0715 04:40:48.581204 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.581505 kubelet[3397]: E0715 04:40:48.581486 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.581739 kubelet[3397]: W0715 04:40:48.581561 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.581739 kubelet[3397]: E0715 04:40:48.581601 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.582018 kubelet[3397]: E0715 04:40:48.582004 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.582174 kubelet[3397]: W0715 04:40:48.582066 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.582174 kubelet[3397]: E0715 04:40:48.582079 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.582445 kubelet[3397]: E0715 04:40:48.582354 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.582445 kubelet[3397]: W0715 04:40:48.582380 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.582445 kubelet[3397]: E0715 04:40:48.582391 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.582650 kubelet[3397]: E0715 04:40:48.582638 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.582761 kubelet[3397]: W0715 04:40:48.582694 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.582881 kubelet[3397]: E0715 04:40:48.582707 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.583786 kubelet[3397]: E0715 04:40:48.583773 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.583949 kubelet[3397]: W0715 04:40:48.583848 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.583949 kubelet[3397]: E0715 04:40:48.583865 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.584116 kubelet[3397]: E0715 04:40:48.584106 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.584181 kubelet[3397]: W0715 04:40:48.584171 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.584385 kubelet[3397]: E0715 04:40:48.584272 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.584482 kubelet[3397]: E0715 04:40:48.584471 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.584537 kubelet[3397]: W0715 04:40:48.584527 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.584584 kubelet[3397]: E0715 04:40:48.584575 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.584794 kubelet[3397]: E0715 04:40:48.584782 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.584924 kubelet[3397]: W0715 04:40:48.584867 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.584924 kubelet[3397]: E0715 04:40:48.584882 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.593127 kubelet[3397]: E0715 04:40:48.593104 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.593127 kubelet[3397]: W0715 04:40:48.593122 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.593217 kubelet[3397]: E0715 04:40:48.593135 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.593321 kubelet[3397]: E0715 04:40:48.593308 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.593321 kubelet[3397]: W0715 04:40:48.593318 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.593370 kubelet[3397]: E0715 04:40:48.593334 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.593508 kubelet[3397]: E0715 04:40:48.593493 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.593508 kubelet[3397]: W0715 04:40:48.593503 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.593508 kubelet[3397]: E0715 04:40:48.593513 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.593690 kubelet[3397]: E0715 04:40:48.593678 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.593690 kubelet[3397]: W0715 04:40:48.593687 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.593764 kubelet[3397]: E0715 04:40:48.593700 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.593848 kubelet[3397]: E0715 04:40:48.593837 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.593848 kubelet[3397]: W0715 04:40:48.593846 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.593900 kubelet[3397]: E0715 04:40:48.593861 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.593965 kubelet[3397]: E0715 04:40:48.593954 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.593965 kubelet[3397]: W0715 04:40:48.593961 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.593999 kubelet[3397]: E0715 04:40:48.593973 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:48.594127 kubelet[3397]: E0715 04:40:48.594114 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.594127 kubelet[3397]: W0715 04:40:48.594123 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.594175 kubelet[3397]: E0715 04:40:48.594132 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:48.594377 kubelet[3397]: E0715 04:40:48.594359 3397 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:48.594541 kubelet[3397]: W0715 04:40:48.594435 3397 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:48.594541 kubelet[3397]: E0715 04:40:48.594460 3397 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:49.349792 containerd[1889]: time="2025-07-15T04:40:49.349680600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:49.356521 containerd[1889]: time="2025-07-15T04:40:49.356483009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 04:40:49.360152 containerd[1889]: time="2025-07-15T04:40:49.360121509Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:49.366755 containerd[1889]: time="2025-07-15T04:40:49.366673222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:49.367314 containerd[1889]: time="2025-07-15T04:40:49.367069964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.219855784s" Jul 15 04:40:49.367314 containerd[1889]: time="2025-07-15T04:40:49.367106221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 04:40:49.369879 containerd[1889]: time="2025-07-15T04:40:49.369854987Z" level=info msg="CreateContainer within sandbox \"096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 04:40:49.412814 containerd[1889]: time="2025-07-15T04:40:49.411365465Z" level=info msg="Container 48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:49.412565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258303499.mount: Deactivated successfully. 
Jul 15 04:40:49.447119 containerd[1889]: time="2025-07-15T04:40:49.447060368Z" level=info msg="CreateContainer within sandbox \"096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a\"" Jul 15 04:40:49.448048 containerd[1889]: time="2025-07-15T04:40:49.447909869Z" level=info msg="StartContainer for \"48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a\"" Jul 15 04:40:49.450573 containerd[1889]: time="2025-07-15T04:40:49.450542568Z" level=info msg="connecting to shim 48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a" address="unix:///run/containerd/s/8f82333513b00a6e370b3ffe784d57794ca55a85b35afa7ee4903dd28c434aee" protocol=ttrpc version=3 Jul 15 04:40:49.468872 systemd[1]: Started cri-containerd-48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a.scope - libcontainer container 48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a. Jul 15 04:40:49.471962 kubelet[3397]: E0715 04:40:49.471914 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdg2" podUID="5ed20801-e92d-42b2-94d6-5d7666efeedc" Jul 15 04:40:49.503259 containerd[1889]: time="2025-07-15T04:40:49.503135721Z" level=info msg="StartContainer for \"48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a\" returns successfully" Jul 15 04:40:49.510931 systemd[1]: cri-containerd-48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a.scope: Deactivated successfully. 
Jul 15 04:40:49.514889 containerd[1889]: time="2025-07-15T04:40:49.514839666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a\" id:\"48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a\" pid:4046 exited_at:{seconds:1752554449 nanos:514082920}" Jul 15 04:40:49.515102 containerd[1889]: time="2025-07-15T04:40:49.514993312Z" level=info msg="received exit event container_id:\"48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a\" id:\"48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a\" pid:4046 exited_at:{seconds:1752554449 nanos:514082920}" Jul 15 04:40:49.532409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48019303f134ede90949b27f3cf9c976ed1092dd7547269aa5e3036e7565675a-rootfs.mount: Deactivated successfully. Jul 15 04:40:49.568158 kubelet[3397]: I0715 04:40:49.567797 3397 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:40:49.595544 kubelet[3397]: I0715 04:40:49.595486 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5ff7487ffc-t466k" podStartSLOduration=2.55023315 podStartE2EDuration="4.595468989s" podCreationTimestamp="2025-07-15 04:40:45 +0000 UTC" firstStartedPulling="2025-07-15 04:40:46.10175942 +0000 UTC m=+17.684960663" lastFinishedPulling="2025-07-15 04:40:48.146995259 +0000 UTC m=+19.730196502" observedRunningTime="2025-07-15 04:40:48.578481554 +0000 UTC m=+20.161682845" watchObservedRunningTime="2025-07-15 04:40:49.595468989 +0000 UTC m=+21.178670240" Jul 15 04:40:51.471606 kubelet[3397]: E0715 04:40:51.471539 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdg2" podUID="5ed20801-e92d-42b2-94d6-5d7666efeedc" 
Jul 15 04:40:51.575606 containerd[1889]: time="2025-07-15T04:40:51.575566285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 04:40:53.471875 kubelet[3397]: E0715 04:40:53.471812 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdg2" podUID="5ed20801-e92d-42b2-94d6-5d7666efeedc" Jul 15 04:40:54.256945 containerd[1889]: time="2025-07-15T04:40:54.256886718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:54.260187 containerd[1889]: time="2025-07-15T04:40:54.260150788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 04:40:54.266016 containerd[1889]: time="2025-07-15T04:40:54.265957926Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:54.272518 containerd[1889]: time="2025-07-15T04:40:54.272459624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:54.273124 containerd[1889]: time="2025-07-15T04:40:54.272965409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.697361947s" Jul 15 04:40:54.273124 containerd[1889]: time="2025-07-15T04:40:54.272991554Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 04:40:54.275081 containerd[1889]: time="2025-07-15T04:40:54.275049783Z" level=info msg="CreateContainer within sandbox \"096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 04:40:54.313631 containerd[1889]: time="2025-07-15T04:40:54.313587059Z" level=info msg="Container e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:54.349153 containerd[1889]: time="2025-07-15T04:40:54.348707412Z" level=info msg="CreateContainer within sandbox \"096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965\"" Jul 15 04:40:54.349561 containerd[1889]: time="2025-07-15T04:40:54.349535504Z" level=info msg="StartContainer for \"e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965\"" Jul 15 04:40:54.351491 containerd[1889]: time="2025-07-15T04:40:54.351454024Z" level=info msg="connecting to shim e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965" address="unix:///run/containerd/s/8f82333513b00a6e370b3ffe784d57794ca55a85b35afa7ee4903dd28c434aee" protocol=ttrpc version=3 Jul 15 04:40:54.367853 systemd[1]: Started cri-containerd-e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965.scope - libcontainer container e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965. 
Jul 15 04:40:54.400531 containerd[1889]: time="2025-07-15T04:40:54.400489852Z" level=info msg="StartContainer for \"e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965\" returns successfully" Jul 15 04:40:55.471620 kubelet[3397]: E0715 04:40:55.471561 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdg2" podUID="5ed20801-e92d-42b2-94d6-5d7666efeedc" Jul 15 04:40:55.675896 kubelet[3397]: I0715 04:40:55.675845 3397 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:40:55.902530 containerd[1889]: time="2025-07-15T04:40:55.902488253Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:40:55.903926 systemd[1]: cri-containerd-e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965.scope: Deactivated successfully. Jul 15 04:40:55.905921 containerd[1889]: time="2025-07-15T04:40:55.905836093Z" level=info msg="received exit event container_id:\"e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965\" id:\"e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965\" pid:4104 exited_at:{seconds:1752554455 nanos:905597573}" Jul 15 04:40:55.905877 systemd[1]: cri-containerd-e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965.scope: Consumed 335ms CPU time, 187.1M memory peak, 165.8M written to disk. 
Jul 15 04:40:55.906653 containerd[1889]: time="2025-07-15T04:40:55.906625703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965\" id:\"e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965\" pid:4104 exited_at:{seconds:1752554455 nanos:905597573}" Jul 15 04:40:55.923969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4f32af799affd87234373a27f68974f8c69fa2c458fa2416a1800b715668965-rootfs.mount: Deactivated successfully. Jul 15 04:40:56.003698 kubelet[3397]: I0715 04:40:56.003670 3397 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 04:40:56.128788 kubelet[3397]: W0715 04:40:56.060909 3397 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4396.0.0-n-9104e8bf1a" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4396.0.0-n-9104e8bf1a' and this object Jul 15 04:40:56.128788 kubelet[3397]: E0715 04:40:56.060961 3397 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4396.0.0-n-9104e8bf1a\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4396.0.0-n-9104e8bf1a' and this object" logger="UnhandledError" Jul 15 04:40:56.128788 kubelet[3397]: W0715 04:40:56.061001 3397 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4396.0.0-n-9104e8bf1a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4396.0.0-n-9104e8bf1a' and this object Jul 15 
04:40:56.128788 kubelet[3397]: E0715 04:40:56.061010 3397 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4396.0.0-n-9104e8bf1a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4396.0.0-n-9104e8bf1a' and this object" logger="UnhandledError" Jul 15 04:40:56.046424 systemd[1]: Created slice kubepods-burstable-pod63a73d2d_8d64_4a1f_9338_cce59b96a36a.slice - libcontainer container kubepods-burstable-pod63a73d2d_8d64_4a1f_9338_cce59b96a36a.slice. Jul 15 04:40:56.065210 systemd[1]: Created slice kubepods-besteffort-pod75552206_ca3b_4a57_8b32_0477c8fbc72b.slice - libcontainer container kubepods-besteffort-pod75552206_ca3b_4a57_8b32_0477c8fbc72b.slice. Jul 15 04:40:56.071552 systemd[1]: Created slice kubepods-burstable-pod22d074d0_17db_4b77_9198_a899ad91ed6c.slice - libcontainer container kubepods-burstable-pod22d074d0_17db_4b77_9198_a899ad91ed6c.slice. Jul 15 04:40:56.076580 systemd[1]: Created slice kubepods-besteffort-pod38bed32a_910d_4564_a20c_a0e4fa024f2c.slice - libcontainer container kubepods-besteffort-pod38bed32a_910d_4564_a20c_a0e4fa024f2c.slice. Jul 15 04:40:56.082215 systemd[1]: Created slice kubepods-besteffort-podd891310b_e597_4d2a_831e_002c63441df3.slice - libcontainer container kubepods-besteffort-podd891310b_e597_4d2a_831e_002c63441df3.slice. Jul 15 04:40:56.088034 systemd[1]: Created slice kubepods-besteffort-pod5f5712fa_77a5_4c75_abb7_89d823859b2f.slice - libcontainer container kubepods-besteffort-pod5f5712fa_77a5_4c75_abb7_89d823859b2f.slice. Jul 15 04:40:56.093704 systemd[1]: Created slice kubepods-besteffort-podb7d75f8f_50ff_438d_b84d_f23bd8cfd99f.slice - libcontainer container kubepods-besteffort-podb7d75f8f_50ff_438d_b84d_f23bd8cfd99f.slice. 
Jul 15 04:40:56.101133 systemd[1]: Created slice kubepods-besteffort-pod6a8e1578_4086_49fa_9074_cde2d003aafe.slice - libcontainer container kubepods-besteffort-pod6a8e1578_4086_49fa_9074_cde2d003aafe.slice. Jul 15 04:40:56.139274 kubelet[3397]: I0715 04:40:56.139232 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppg6c\" (UniqueName: \"kubernetes.io/projected/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-kube-api-access-ppg6c\") pod \"whisker-5d4cf7bb47-thq78\" (UID: \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\") " pod="calico-system/whisker-5d4cf7bb47-thq78" Jul 15 04:40:56.139274 kubelet[3397]: I0715 04:40:56.139268 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63a73d2d-8d64-4a1f-9338-cce59b96a36a-config-volume\") pod \"coredns-7c65d6cfc9-kzd65\" (UID: \"63a73d2d-8d64-4a1f-9338-cce59b96a36a\") " pod="kube-system/coredns-7c65d6cfc9-kzd65" Jul 15 04:40:56.139274 kubelet[3397]: I0715 04:40:56.139286 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-ca-bundle\") pod \"whisker-5d4cf7bb47-thq78\" (UID: \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\") " pod="calico-system/whisker-5d4cf7bb47-thq78" Jul 15 04:40:56.139454 kubelet[3397]: I0715 04:40:56.139297 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pm5n\" (UniqueName: \"kubernetes.io/projected/63a73d2d-8d64-4a1f-9338-cce59b96a36a-kube-api-access-4pm5n\") pod \"coredns-7c65d6cfc9-kzd65\" (UID: \"63a73d2d-8d64-4a1f-9338-cce59b96a36a\") " pod="kube-system/coredns-7c65d6cfc9-kzd65" Jul 15 04:40:56.139454 kubelet[3397]: I0715 04:40:56.139307 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a8e1578-4086-49fa-9074-cde2d003aafe-calico-apiserver-certs\") pod \"calico-apiserver-f76686786-25zvj\" (UID: \"6a8e1578-4086-49fa-9074-cde2d003aafe\") " pod="calico-apiserver/calico-apiserver-f76686786-25zvj" Jul 15 04:40:56.139454 kubelet[3397]: I0715 04:40:56.139319 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-backend-key-pair\") pod \"whisker-5d4cf7bb47-thq78\" (UID: \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\") " pod="calico-system/whisker-5d4cf7bb47-thq78" Jul 15 04:40:56.139454 kubelet[3397]: I0715 04:40:56.139332 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8vwr\" (UniqueName: \"kubernetes.io/projected/22d074d0-17db-4b77-9198-a899ad91ed6c-kube-api-access-q8vwr\") pod \"coredns-7c65d6cfc9-kmdcq\" (UID: \"22d074d0-17db-4b77-9198-a899ad91ed6c\") " pod="kube-system/coredns-7c65d6cfc9-kmdcq" Jul 15 04:40:56.139454 kubelet[3397]: I0715 04:40:56.139343 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38bed32a-910d-4564-a20c-a0e4fa024f2c-config\") pod \"goldmane-58fd7646b9-fghwk\" (UID: \"38bed32a-910d-4564-a20c-a0e4fa024f2c\") " pod="calico-system/goldmane-58fd7646b9-fghwk" Jul 15 04:40:56.139543 kubelet[3397]: I0715 04:40:56.139355 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqfv\" (UniqueName: \"kubernetes.io/projected/5f5712fa-77a5-4c75-abb7-89d823859b2f-kube-api-access-dkqfv\") pod \"calico-apiserver-65866b6cfd-dcrr6\" (UID: \"5f5712fa-77a5-4c75-abb7-89d823859b2f\") " pod="calico-apiserver/calico-apiserver-65866b6cfd-dcrr6" Jul 15 04:40:56.139543 kubelet[3397]: I0715 04:40:56.139365 3397 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22d074d0-17db-4b77-9198-a899ad91ed6c-config-volume\") pod \"coredns-7c65d6cfc9-kmdcq\" (UID: \"22d074d0-17db-4b77-9198-a899ad91ed6c\") " pod="kube-system/coredns-7c65d6cfc9-kmdcq" Jul 15 04:40:56.139543 kubelet[3397]: I0715 04:40:56.139375 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f5712fa-77a5-4c75-abb7-89d823859b2f-calico-apiserver-certs\") pod \"calico-apiserver-65866b6cfd-dcrr6\" (UID: \"5f5712fa-77a5-4c75-abb7-89d823859b2f\") " pod="calico-apiserver/calico-apiserver-65866b6cfd-dcrr6" Jul 15 04:40:56.139543 kubelet[3397]: I0715 04:40:56.139388 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92bx\" (UniqueName: \"kubernetes.io/projected/d891310b-e597-4d2a-831e-002c63441df3-kube-api-access-m92bx\") pod \"calico-apiserver-65866b6cfd-95btq\" (UID: \"d891310b-e597-4d2a-831e-002c63441df3\") " pod="calico-apiserver/calico-apiserver-65866b6cfd-95btq" Jul 15 04:40:56.139543 kubelet[3397]: I0715 04:40:56.139400 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s68hm\" (UniqueName: \"kubernetes.io/projected/75552206-ca3b-4a57-8b32-0477c8fbc72b-kube-api-access-s68hm\") pod \"calico-kube-controllers-576b877669-tvxw7\" (UID: \"75552206-ca3b-4a57-8b32-0477c8fbc72b\") " pod="calico-system/calico-kube-controllers-576b877669-tvxw7" Jul 15 04:40:56.139628 kubelet[3397]: I0715 04:40:56.139412 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d891310b-e597-4d2a-831e-002c63441df3-calico-apiserver-certs\") pod \"calico-apiserver-65866b6cfd-95btq\" (UID: 
\"d891310b-e597-4d2a-831e-002c63441df3\") " pod="calico-apiserver/calico-apiserver-65866b6cfd-95btq" Jul 15 04:40:56.139628 kubelet[3397]: I0715 04:40:56.139422 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87f78\" (UniqueName: \"kubernetes.io/projected/38bed32a-910d-4564-a20c-a0e4fa024f2c-kube-api-access-87f78\") pod \"goldmane-58fd7646b9-fghwk\" (UID: \"38bed32a-910d-4564-a20c-a0e4fa024f2c\") " pod="calico-system/goldmane-58fd7646b9-fghwk" Jul 15 04:40:56.139628 kubelet[3397]: I0715 04:40:56.139434 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75552206-ca3b-4a57-8b32-0477c8fbc72b-tigera-ca-bundle\") pod \"calico-kube-controllers-576b877669-tvxw7\" (UID: \"75552206-ca3b-4a57-8b32-0477c8fbc72b\") " pod="calico-system/calico-kube-controllers-576b877669-tvxw7" Jul 15 04:40:56.139628 kubelet[3397]: I0715 04:40:56.139444 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qbr5\" (UniqueName: \"kubernetes.io/projected/6a8e1578-4086-49fa-9074-cde2d003aafe-kube-api-access-9qbr5\") pod \"calico-apiserver-f76686786-25zvj\" (UID: \"6a8e1578-4086-49fa-9074-cde2d003aafe\") " pod="calico-apiserver/calico-apiserver-f76686786-25zvj" Jul 15 04:40:56.139628 kubelet[3397]: I0715 04:40:56.139456 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/38bed32a-910d-4564-a20c-a0e4fa024f2c-goldmane-key-pair\") pod \"goldmane-58fd7646b9-fghwk\" (UID: \"38bed32a-910d-4564-a20c-a0e4fa024f2c\") " pod="calico-system/goldmane-58fd7646b9-fghwk" Jul 15 04:40:56.139708 kubelet[3397]: I0715 04:40:56.139467 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/38bed32a-910d-4564-a20c-a0e4fa024f2c-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-fghwk\" (UID: \"38bed32a-910d-4564-a20c-a0e4fa024f2c\") " pod="calico-system/goldmane-58fd7646b9-fghwk" Jul 15 04:40:56.436608 containerd[1889]: time="2025-07-15T04:40:56.436437213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-95btq,Uid:d891310b-e597-4d2a-831e-002c63441df3,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:56.437663 containerd[1889]: time="2025-07-15T04:40:56.436515944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576b877669-tvxw7,Uid:75552206-ca3b-4a57-8b32-0477c8fbc72b,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:56.438200 containerd[1889]: time="2025-07-15T04:40:56.436561241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4cf7bb47-thq78,Uid:b7d75f8f-50ff-438d-b84d-f23bd8cfd99f,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:56.438200 containerd[1889]: time="2025-07-15T04:40:56.437280986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-dcrr6,Uid:5f5712fa-77a5-4c75-abb7-89d823859b2f,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:56.438300 containerd[1889]: time="2025-07-15T04:40:56.437314275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76686786-25zvj,Uid:6a8e1578-4086-49fa-9074-cde2d003aafe,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:56.438300 containerd[1889]: time="2025-07-15T04:40:56.437334275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmdcq,Uid:22d074d0-17db-4b77-9198-a899ad91ed6c,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:56.438300 containerd[1889]: time="2025-07-15T04:40:56.437385941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kzd65,Uid:63a73d2d-8d64-4a1f-9338-cce59b96a36a,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:56.569817 containerd[1889]: 
time="2025-07-15T04:40:56.569692624Z" level=error msg="Failed to destroy network for sandbox \"d9b5137b2a2e0dfc5415bd54708d5d92240b655a6b61fb8adafd1323c2955dee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.578255 containerd[1889]: time="2025-07-15T04:40:56.578134787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576b877669-tvxw7,Uid:75552206-ca3b-4a57-8b32-0477c8fbc72b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b5137b2a2e0dfc5415bd54708d5d92240b655a6b61fb8adafd1323c2955dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.578424 kubelet[3397]: E0715 04:40:56.578348 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b5137b2a2e0dfc5415bd54708d5d92240b655a6b61fb8adafd1323c2955dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.578666 kubelet[3397]: E0715 04:40:56.578438 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b5137b2a2e0dfc5415bd54708d5d92240b655a6b61fb8adafd1323c2955dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576b877669-tvxw7" Jul 15 04:40:56.578666 kubelet[3397]: E0715 04:40:56.578509 3397 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b5137b2a2e0dfc5415bd54708d5d92240b655a6b61fb8adafd1323c2955dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576b877669-tvxw7" Jul 15 04:40:56.578666 kubelet[3397]: E0715 04:40:56.578647 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-576b877669-tvxw7_calico-system(75552206-ca3b-4a57-8b32-0477c8fbc72b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-576b877669-tvxw7_calico-system(75552206-ca3b-4a57-8b32-0477c8fbc72b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9b5137b2a2e0dfc5415bd54708d5d92240b655a6b61fb8adafd1323c2955dee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576b877669-tvxw7" podUID="75552206-ca3b-4a57-8b32-0477c8fbc72b" Jul 15 04:40:56.595104 containerd[1889]: time="2025-07-15T04:40:56.594898733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 04:40:56.613147 containerd[1889]: time="2025-07-15T04:40:56.613102775Z" level=error msg="Failed to destroy network for sandbox \"668aafd8212f4a2afce76661a070d779fe74d0c3e6ea549a15883e148c538867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.618531 containerd[1889]: time="2025-07-15T04:40:56.618409217Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-95btq,Uid:d891310b-e597-4d2a-831e-002c63441df3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"668aafd8212f4a2afce76661a070d779fe74d0c3e6ea549a15883e148c538867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.619452 kubelet[3397]: E0715 04:40:56.618702 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"668aafd8212f4a2afce76661a070d779fe74d0c3e6ea549a15883e148c538867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.619452 kubelet[3397]: E0715 04:40:56.618784 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"668aafd8212f4a2afce76661a070d779fe74d0c3e6ea549a15883e148c538867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65866b6cfd-95btq" Jul 15 04:40:56.619452 kubelet[3397]: E0715 04:40:56.618799 3397 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"668aafd8212f4a2afce76661a070d779fe74d0c3e6ea549a15883e148c538867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65866b6cfd-95btq" Jul 15 04:40:56.619573 kubelet[3397]: E0715 04:40:56.618835 3397 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65866b6cfd-95btq_calico-apiserver(d891310b-e597-4d2a-831e-002c63441df3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65866b6cfd-95btq_calico-apiserver(d891310b-e597-4d2a-831e-002c63441df3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"668aafd8212f4a2afce76661a070d779fe74d0c3e6ea549a15883e148c538867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65866b6cfd-95btq" podUID="d891310b-e597-4d2a-831e-002c63441df3" Jul 15 04:40:56.649997 containerd[1889]: time="2025-07-15T04:40:56.649951938Z" level=error msg="Failed to destroy network for sandbox \"34b42151184a9d695a9e6dbd94c4294079552939fe1717005498eab54abad094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.657132 containerd[1889]: time="2025-07-15T04:40:56.657064944Z" level=error msg="Failed to destroy network for sandbox \"248f7d061a619b9c515e29fb102c778bacb650210abf21173001f50bfd702a74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.661529 containerd[1889]: time="2025-07-15T04:40:56.661483308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4cf7bb47-thq78,Uid:b7d75f8f-50ff-438d-b84d-f23bd8cfd99f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34b42151184a9d695a9e6dbd94c4294079552939fe1717005498eab54abad094\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.662273 kubelet[3397]: E0715 04:40:56.662219 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34b42151184a9d695a9e6dbd94c4294079552939fe1717005498eab54abad094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.662402 kubelet[3397]: E0715 04:40:56.662381 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34b42151184a9d695a9e6dbd94c4294079552939fe1717005498eab54abad094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d4cf7bb47-thq78" Jul 15 04:40:56.662445 kubelet[3397]: E0715 04:40:56.662403 3397 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34b42151184a9d695a9e6dbd94c4294079552939fe1717005498eab54abad094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d4cf7bb47-thq78" Jul 15 04:40:56.662477 kubelet[3397]: E0715 04:40:56.662454 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d4cf7bb47-thq78_calico-system(b7d75f8f-50ff-438d-b84d-f23bd8cfd99f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d4cf7bb47-thq78_calico-system(b7d75f8f-50ff-438d-b84d-f23bd8cfd99f)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"34b42151184a9d695a9e6dbd94c4294079552939fe1717005498eab54abad094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d4cf7bb47-thq78" podUID="b7d75f8f-50ff-438d-b84d-f23bd8cfd99f" Jul 15 04:40:56.668584 containerd[1889]: time="2025-07-15T04:40:56.668542241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-dcrr6,Uid:5f5712fa-77a5-4c75-abb7-89d823859b2f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"248f7d061a619b9c515e29fb102c778bacb650210abf21173001f50bfd702a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.668961 kubelet[3397]: E0715 04:40:56.668918 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248f7d061a619b9c515e29fb102c778bacb650210abf21173001f50bfd702a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.669063 kubelet[3397]: E0715 04:40:56.668973 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248f7d061a619b9c515e29fb102c778bacb650210abf21173001f50bfd702a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65866b6cfd-dcrr6" Jul 15 04:40:56.669063 kubelet[3397]: E0715 04:40:56.668989 3397 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248f7d061a619b9c515e29fb102c778bacb650210abf21173001f50bfd702a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65866b6cfd-dcrr6" Jul 15 04:40:56.669063 kubelet[3397]: E0715 04:40:56.669020 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65866b6cfd-dcrr6_calico-apiserver(5f5712fa-77a5-4c75-abb7-89d823859b2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65866b6cfd-dcrr6_calico-apiserver(5f5712fa-77a5-4c75-abb7-89d823859b2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"248f7d061a619b9c515e29fb102c778bacb650210abf21173001f50bfd702a74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65866b6cfd-dcrr6" podUID="5f5712fa-77a5-4c75-abb7-89d823859b2f" Jul 15 04:40:56.683501 containerd[1889]: time="2025-07-15T04:40:56.683379522Z" level=error msg="Failed to destroy network for sandbox \"c18b8de251effe1f4b96abfc6ed5199a9d300e73958b330cd17cffcec7b731e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.688227 containerd[1889]: time="2025-07-15T04:40:56.687639265Z" level=error msg="Failed to destroy network for sandbox \"a5d5ed75a489f9c58d34793dcfc1cc1d3a5c13e183bcbfac75699967529f8040\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 15 04:40:56.688395 containerd[1889]: time="2025-07-15T04:40:56.688237093Z" level=error msg="Failed to destroy network for sandbox \"6e681e01e1c25307fe42d3a2b14fb7d29b93f50b12981349b4dc94140c1bbaf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.689087 containerd[1889]: time="2025-07-15T04:40:56.689054929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76686786-25zvj,Uid:6a8e1578-4086-49fa-9074-cde2d003aafe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c18b8de251effe1f4b96abfc6ed5199a9d300e73958b330cd17cffcec7b731e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.689574 kubelet[3397]: E0715 04:40:56.689473 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c18b8de251effe1f4b96abfc6ed5199a9d300e73958b330cd17cffcec7b731e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.689574 kubelet[3397]: E0715 04:40:56.689548 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c18b8de251effe1f4b96abfc6ed5199a9d300e73958b330cd17cffcec7b731e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f76686786-25zvj" Jul 15 04:40:56.689770 kubelet[3397]: 
E0715 04:40:56.689563 3397 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c18b8de251effe1f4b96abfc6ed5199a9d300e73958b330cd17cffcec7b731e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f76686786-25zvj" Jul 15 04:40:56.689856 kubelet[3397]: E0715 04:40:56.689831 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f76686786-25zvj_calico-apiserver(6a8e1578-4086-49fa-9074-cde2d003aafe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f76686786-25zvj_calico-apiserver(6a8e1578-4086-49fa-9074-cde2d003aafe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c18b8de251effe1f4b96abfc6ed5199a9d300e73958b330cd17cffcec7b731e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f76686786-25zvj" podUID="6a8e1578-4086-49fa-9074-cde2d003aafe" Jul 15 04:40:56.694198 containerd[1889]: time="2025-07-15T04:40:56.694164716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmdcq,Uid:22d074d0-17db-4b77-9198-a899ad91ed6c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d5ed75a489f9c58d34793dcfc1cc1d3a5c13e183bcbfac75699967529f8040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.694578 kubelet[3397]: E0715 04:40:56.694522 3397 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d5ed75a489f9c58d34793dcfc1cc1d3a5c13e183bcbfac75699967529f8040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.694578 kubelet[3397]: E0715 04:40:56.694575 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d5ed75a489f9c58d34793dcfc1cc1d3a5c13e183bcbfac75699967529f8040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kmdcq" Jul 15 04:40:56.695791 kubelet[3397]: E0715 04:40:56.694588 3397 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d5ed75a489f9c58d34793dcfc1cc1d3a5c13e183bcbfac75699967529f8040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kmdcq" Jul 15 04:40:56.695791 kubelet[3397]: E0715 04:40:56.694621 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kmdcq_kube-system(22d074d0-17db-4b77-9198-a899ad91ed6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kmdcq_kube-system(22d074d0-17db-4b77-9198-a899ad91ed6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5d5ed75a489f9c58d34793dcfc1cc1d3a5c13e183bcbfac75699967529f8040\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kmdcq" podUID="22d074d0-17db-4b77-9198-a899ad91ed6c" Jul 15 04:40:56.699478 containerd[1889]: time="2025-07-15T04:40:56.699438509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kzd65,Uid:63a73d2d-8d64-4a1f-9338-cce59b96a36a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e681e01e1c25307fe42d3a2b14fb7d29b93f50b12981349b4dc94140c1bbaf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.699708 kubelet[3397]: E0715 04:40:56.699669 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e681e01e1c25307fe42d3a2b14fb7d29b93f50b12981349b4dc94140c1bbaf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:56.699834 kubelet[3397]: E0715 04:40:56.699805 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e681e01e1c25307fe42d3a2b14fb7d29b93f50b12981349b4dc94140c1bbaf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kzd65" Jul 15 04:40:56.699863 kubelet[3397]: E0715 04:40:56.699832 3397 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e681e01e1c25307fe42d3a2b14fb7d29b93f50b12981349b4dc94140c1bbaf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kzd65" Jul 15 04:40:56.699893 kubelet[3397]: E0715 04:40:56.699871 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kzd65_kube-system(63a73d2d-8d64-4a1f-9338-cce59b96a36a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kzd65_kube-system(63a73d2d-8d64-4a1f-9338-cce59b96a36a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e681e01e1c25307fe42d3a2b14fb7d29b93f50b12981349b4dc94140c1bbaf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kzd65" podUID="63a73d2d-8d64-4a1f-9338-cce59b96a36a" Jul 15 04:40:57.247059 kubelet[3397]: E0715 04:40:57.246947 3397 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 15 04:40:57.247059 kubelet[3397]: E0715 04:40:57.247059 3397 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38bed32a-910d-4564-a20c-a0e4fa024f2c-goldmane-ca-bundle podName:38bed32a-910d-4564-a20c-a0e4fa024f2c nodeName:}" failed. No retries permitted until 2025-07-15 04:40:57.747030209 +0000 UTC m=+29.330231452 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/38bed32a-910d-4564-a20c-a0e4fa024f2c-goldmane-ca-bundle") pod "goldmane-58fd7646b9-fghwk" (UID: "38bed32a-910d-4564-a20c-a0e4fa024f2c") : failed to sync configmap cache: timed out waiting for the condition Jul 15 04:40:57.476645 systemd[1]: Created slice kubepods-besteffort-pod5ed20801_e92d_42b2_94d6_5d7666efeedc.slice - libcontainer container kubepods-besteffort-pod5ed20801_e92d_42b2_94d6_5d7666efeedc.slice. Jul 15 04:40:57.478942 containerd[1889]: time="2025-07-15T04:40:57.478906483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdg2,Uid:5ed20801-e92d-42b2-94d6-5d7666efeedc,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:57.527989 containerd[1889]: time="2025-07-15T04:40:57.527836969Z" level=error msg="Failed to destroy network for sandbox \"1bc38a21712691b9f3121e60d0692853a65a2619287aaf4ed840e4828a540d45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:57.530051 systemd[1]: run-netns-cni\x2d4adb2188\x2dc6a4\x2d17ae\x2ddd15\x2db846cb64f223.mount: Deactivated successfully. 
Jul 15 04:40:57.532057 containerd[1889]: time="2025-07-15T04:40:57.532015632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdg2,Uid:5ed20801-e92d-42b2-94d6-5d7666efeedc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc38a21712691b9f3121e60d0692853a65a2619287aaf4ed840e4828a540d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:57.532590 kubelet[3397]: E0715 04:40:57.532437 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc38a21712691b9f3121e60d0692853a65a2619287aaf4ed840e4828a540d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:57.532590 kubelet[3397]: E0715 04:40:57.532492 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc38a21712691b9f3121e60d0692853a65a2619287aaf4ed840e4828a540d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdg2" Jul 15 04:40:57.532590 kubelet[3397]: E0715 04:40:57.532507 3397 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc38a21712691b9f3121e60d0692853a65a2619287aaf4ed840e4828a540d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdg2" 
Jul 15 04:40:57.532818 kubelet[3397]: E0715 04:40:57.532549 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdg2_calico-system(5ed20801-e92d-42b2-94d6-5d7666efeedc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdg2_calico-system(5ed20801-e92d-42b2-94d6-5d7666efeedc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bc38a21712691b9f3121e60d0692853a65a2619287aaf4ed840e4828a540d45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdg2" podUID="5ed20801-e92d-42b2-94d6-5d7666efeedc" Jul 15 04:40:57.940735 containerd[1889]: time="2025-07-15T04:40:57.938340704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fghwk,Uid:38bed32a-910d-4564-a20c-a0e4fa024f2c,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:58.008868 containerd[1889]: time="2025-07-15T04:40:58.008771384Z" level=error msg="Failed to destroy network for sandbox \"7e1656741da1ab2e4ac9d7c0d999fe3d0c8fa3ddd5534053065da913afb0c4db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:58.012315 systemd[1]: run-netns-cni\x2deb21cb7c\x2db42c\x2dc5d8\x2d6c64\x2ddc141951bf80.mount: Deactivated successfully. 
Jul 15 04:40:58.013938 containerd[1889]: time="2025-07-15T04:40:58.013815432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fghwk,Uid:38bed32a-910d-4564-a20c-a0e4fa024f2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1656741da1ab2e4ac9d7c0d999fe3d0c8fa3ddd5534053065da913afb0c4db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:58.014523 kubelet[3397]: E0715 04:40:58.014099 3397 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1656741da1ab2e4ac9d7c0d999fe3d0c8fa3ddd5534053065da913afb0c4db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:58.014523 kubelet[3397]: E0715 04:40:58.014160 3397 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1656741da1ab2e4ac9d7c0d999fe3d0c8fa3ddd5534053065da913afb0c4db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fghwk" Jul 15 04:40:58.014523 kubelet[3397]: E0715 04:40:58.014182 3397 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1656741da1ab2e4ac9d7c0d999fe3d0c8fa3ddd5534053065da913afb0c4db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-58fd7646b9-fghwk" Jul 15 04:40:58.014870 kubelet[3397]: E0715 04:40:58.014220 3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fghwk_calico-system(38bed32a-910d-4564-a20c-a0e4fa024f2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fghwk_calico-system(38bed32a-910d-4564-a20c-a0e4fa024f2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e1656741da1ab2e4ac9d7c0d999fe3d0c8fa3ddd5534053065da913afb0c4db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fghwk" podUID="38bed32a-910d-4564-a20c-a0e4fa024f2c" Jul 15 04:41:00.606864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283928599.mount: Deactivated successfully. Jul 15 04:41:01.063772 containerd[1889]: time="2025-07-15T04:41:01.063260702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:01.068794 containerd[1889]: time="2025-07-15T04:41:01.068738982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 04:41:01.076121 containerd[1889]: time="2025-07-15T04:41:01.076056830Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:01.083966 containerd[1889]: time="2025-07-15T04:41:01.083884976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:01.084511 containerd[1889]: 
time="2025-07-15T04:41:01.084226228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.489292126s" Jul 15 04:41:01.084511 containerd[1889]: time="2025-07-15T04:41:01.084256733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 04:41:01.094500 containerd[1889]: time="2025-07-15T04:41:01.094123470Z" level=info msg="CreateContainer within sandbox \"096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 04:41:01.151585 containerd[1889]: time="2025-07-15T04:41:01.150574644Z" level=info msg="Container 1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:01.154130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456706296.mount: Deactivated successfully. 
Jul 15 04:41:01.200871 containerd[1889]: time="2025-07-15T04:41:01.200824874Z" level=info msg="CreateContainer within sandbox \"096f93ac19a1030a16ec4c0110fff9bfb2cd7d878441cf4e528d74531cac4b65\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\"" Jul 15 04:41:01.201755 containerd[1889]: time="2025-07-15T04:41:01.201646591Z" level=info msg="StartContainer for \"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\"" Jul 15 04:41:01.203868 containerd[1889]: time="2025-07-15T04:41:01.203787986Z" level=info msg="connecting to shim 1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee" address="unix:///run/containerd/s/8f82333513b00a6e370b3ffe784d57794ca55a85b35afa7ee4903dd28c434aee" protocol=ttrpc version=3 Jul 15 04:41:01.220899 systemd[1]: Started cri-containerd-1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee.scope - libcontainer container 1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee. Jul 15 04:41:01.258525 containerd[1889]: time="2025-07-15T04:41:01.258479851Z" level=info msg="StartContainer for \"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\" returns successfully" Jul 15 04:41:01.489753 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 04:41:01.489909 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 04:41:01.672311 kubelet[3397]: I0715 04:41:01.672261 3397 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppg6c\" (UniqueName: \"kubernetes.io/projected/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-kube-api-access-ppg6c\") pod \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\" (UID: \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\") " Jul 15 04:41:01.674533 kubelet[3397]: I0715 04:41:01.673800 3397 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-ca-bundle\") pod \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\" (UID: \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\") " Jul 15 04:41:01.674533 kubelet[3397]: I0715 04:41:01.673985 3397 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-backend-key-pair\") pod \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\" (UID: \"b7d75f8f-50ff-438d-b84d-f23bd8cfd99f\") " Jul 15 04:41:01.675347 kubelet[3397]: I0715 04:41:01.675317 3397 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b7d75f8f-50ff-438d-b84d-f23bd8cfd99f" (UID: "b7d75f8f-50ff-438d-b84d-f23bd8cfd99f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 04:41:01.684358 systemd[1]: var-lib-kubelet-pods-b7d75f8f\x2d50ff\x2d438d\x2db84d\x2df23bd8cfd99f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dppg6c.mount: Deactivated successfully. 
Jul 15 04:41:01.691449 kubelet[3397]: I0715 04:41:01.691356 3397 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-kube-api-access-ppg6c" (OuterVolumeSpecName: "kube-api-access-ppg6c") pod "b7d75f8f-50ff-438d-b84d-f23bd8cfd99f" (UID: "b7d75f8f-50ff-438d-b84d-f23bd8cfd99f"). InnerVolumeSpecName "kube-api-access-ppg6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:41:01.692431 kubelet[3397]: I0715 04:41:01.692286 3397 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b7d75f8f-50ff-438d-b84d-f23bd8cfd99f" (UID: "b7d75f8f-50ff-438d-b84d-f23bd8cfd99f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 04:41:01.692686 systemd[1]: var-lib-kubelet-pods-b7d75f8f\x2d50ff\x2d438d\x2db84d\x2df23bd8cfd99f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 04:41:01.775240 kubelet[3397]: I0715 04:41:01.774795 3397 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppg6c\" (UniqueName: \"kubernetes.io/projected/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-kube-api-access-ppg6c\") on node \"ci-4396.0.0-n-9104e8bf1a\" DevicePath \"\"" Jul 15 04:41:01.775240 kubelet[3397]: I0715 04:41:01.774880 3397 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-ca-bundle\") on node \"ci-4396.0.0-n-9104e8bf1a\" DevicePath \"\"" Jul 15 04:41:01.775240 kubelet[3397]: I0715 04:41:01.774888 3397 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f-whisker-backend-key-pair\") on node \"ci-4396.0.0-n-9104e8bf1a\" DevicePath \"\"" Jul 15 04:41:02.483181 systemd[1]: Removed slice kubepods-besteffort-podb7d75f8f_50ff_438d_b84d_f23bd8cfd99f.slice - libcontainer container kubepods-besteffort-podb7d75f8f_50ff_438d_b84d_f23bd8cfd99f.slice. Jul 15 04:41:02.634362 kubelet[3397]: I0715 04:41:02.633662 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-94r5s" podStartSLOduration=2.789931496 podStartE2EDuration="17.633351796s" podCreationTimestamp="2025-07-15 04:40:45 +0000 UTC" firstStartedPulling="2025-07-15 04:40:46.241482815 +0000 UTC m=+17.824684058" lastFinishedPulling="2025-07-15 04:41:01.084903115 +0000 UTC m=+32.668104358" observedRunningTime="2025-07-15 04:41:01.659199795 +0000 UTC m=+33.242401038" watchObservedRunningTime="2025-07-15 04:41:02.633351796 +0000 UTC m=+34.216553039" Jul 15 04:41:02.703222 systemd[1]: Created slice kubepods-besteffort-poddfe936e8_bd17_4fe6_8003_485f085677d2.slice - libcontainer container kubepods-besteffort-poddfe936e8_bd17_4fe6_8003_485f085677d2.slice. 
Jul 15 04:41:02.783971 kubelet[3397]: I0715 04:41:02.783757 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfe936e8-bd17-4fe6-8003-485f085677d2-whisker-ca-bundle\") pod \"whisker-6797759d77-x4sdx\" (UID: \"dfe936e8-bd17-4fe6-8003-485f085677d2\") " pod="calico-system/whisker-6797759d77-x4sdx" Jul 15 04:41:02.783971 kubelet[3397]: I0715 04:41:02.783830 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfe936e8-bd17-4fe6-8003-485f085677d2-whisker-backend-key-pair\") pod \"whisker-6797759d77-x4sdx\" (UID: \"dfe936e8-bd17-4fe6-8003-485f085677d2\") " pod="calico-system/whisker-6797759d77-x4sdx" Jul 15 04:41:02.783971 kubelet[3397]: I0715 04:41:02.783877 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjp4\" (UniqueName: \"kubernetes.io/projected/dfe936e8-bd17-4fe6-8003-485f085677d2-kube-api-access-7cjp4\") pod \"whisker-6797759d77-x4sdx\" (UID: \"dfe936e8-bd17-4fe6-8003-485f085677d2\") " pod="calico-system/whisker-6797759d77-x4sdx" Jul 15 04:41:03.008465 containerd[1889]: time="2025-07-15T04:41:03.008366121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6797759d77-x4sdx,Uid:dfe936e8-bd17-4fe6-8003-485f085677d2,Namespace:calico-system,Attempt:0,}" Jul 15 04:41:03.183371 systemd-networkd[1479]: cali18f915f91c0: Link UP Jul 15 04:41:03.184387 systemd-networkd[1479]: cali18f915f91c0: Gained carrier Jul 15 04:41:03.204141 containerd[1889]: 2025-07-15 04:41:03.056 [INFO][4546] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:41:03.204141 containerd[1889]: 2025-07-15 04:41:03.092 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0 whisker-6797759d77- calico-system dfe936e8-bd17-4fe6-8003-485f085677d2 881 0 2025-07-15 04:41:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6797759d77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a whisker-6797759d77-x4sdx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali18f915f91c0 [] [] }} ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-" Jul 15 04:41:03.204141 containerd[1889]: 2025-07-15 04:41:03.092 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" Jul 15 04:41:03.204141 containerd[1889]: 2025-07-15 04:41:03.127 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" HandleID="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.127 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" HandleID="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af60), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4396.0.0-n-9104e8bf1a", "pod":"whisker-6797759d77-x4sdx", "timestamp":"2025-07-15 04:41:03.127221183 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.127 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.127 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.127 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.135 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.140 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.144 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.146 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.204911 containerd[1889]: 2025-07-15 04:41:03.148 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.205316 containerd[1889]: 2025-07-15 04:41:03.148 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 
handle="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.205316 containerd[1889]: 2025-07-15 04:41:03.150 [INFO][4563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d Jul 15 04:41:03.205316 containerd[1889]: 2025-07-15 04:41:03.160 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.205316 containerd[1889]: 2025-07-15 04:41:03.168 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.193/26] block=192.168.32.192/26 handle="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.205316 containerd[1889]: 2025-07-15 04:41:03.168 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.193/26] handle="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:03.205316 containerd[1889]: 2025-07-15 04:41:03.168 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:41:03.205316 containerd[1889]: 2025-07-15 04:41:03.168 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.193/26] IPv6=[] ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" HandleID="k8s-pod-network.5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" Jul 15 04:41:03.205539 containerd[1889]: 2025-07-15 04:41:03.173 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0", GenerateName:"whisker-6797759d77-", Namespace:"calico-system", SelfLink:"", UID:"dfe936e8-bd17-4fe6-8003-485f085677d2", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6797759d77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"whisker-6797759d77-x4sdx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali18f915f91c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:03.205539 containerd[1889]: 2025-07-15 04:41:03.173 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.193/32] ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" Jul 15 04:41:03.205809 containerd[1889]: 2025-07-15 04:41:03.173 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18f915f91c0 ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" Jul 15 04:41:03.205809 containerd[1889]: 2025-07-15 04:41:03.184 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" Jul 15 04:41:03.205958 containerd[1889]: 2025-07-15 04:41:03.185 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0", GenerateName:"whisker-6797759d77-", Namespace:"calico-system", SelfLink:"", 
UID:"dfe936e8-bd17-4fe6-8003-485f085677d2", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6797759d77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d", Pod:"whisker-6797759d77-x4sdx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali18f915f91c0", MAC:"3a:09:22:bc:db:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:03.206040 containerd[1889]: 2025-07-15 04:41:03.201 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" Namespace="calico-system" Pod="whisker-6797759d77-x4sdx" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-whisker--6797759d77--x4sdx-eth0" Jul 15 04:41:03.271818 containerd[1889]: time="2025-07-15T04:41:03.271765294Z" level=info msg="connecting to shim 5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d" address="unix:///run/containerd/s/815cd498eaf391a21cf31ed436fdfb4dfc6922d641002356b4fe73eba92f99b6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:03.300027 systemd[1]: Started 
cri-containerd-5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d.scope - libcontainer container 5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d. Jul 15 04:41:03.344726 containerd[1889]: time="2025-07-15T04:41:03.344230237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6797759d77-x4sdx,Uid:dfe936e8-bd17-4fe6-8003-485f085677d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d\"" Jul 15 04:41:03.347570 containerd[1889]: time="2025-07-15T04:41:03.347533537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 04:41:03.485605 systemd-networkd[1479]: vxlan.calico: Link UP Jul 15 04:41:03.485615 systemd-networkd[1479]: vxlan.calico: Gained carrier Jul 15 04:41:04.474121 kubelet[3397]: I0715 04:41:04.474077 3397 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d75f8f-50ff-438d-b84d-f23bd8cfd99f" path="/var/lib/kubelet/pods/b7d75f8f-50ff-438d-b84d-f23bd8cfd99f/volumes" Jul 15 04:41:04.676933 systemd-networkd[1479]: cali18f915f91c0: Gained IPv6LL Jul 15 04:41:04.875764 containerd[1889]: time="2025-07-15T04:41:04.875480748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:04.879430 containerd[1889]: time="2025-07-15T04:41:04.879383068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 04:41:04.885010 containerd[1889]: time="2025-07-15T04:41:04.884939599Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:04.891506 containerd[1889]: time="2025-07-15T04:41:04.891429026Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:04.892048 containerd[1889]: time="2025-07-15T04:41:04.892019934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.544381746s" Jul 15 04:41:04.892158 containerd[1889]: time="2025-07-15T04:41:04.892142299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 04:41:04.894682 containerd[1889]: time="2025-07-15T04:41:04.894646138Z" level=info msg="CreateContainer within sandbox \"5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 04:41:04.945660 containerd[1889]: time="2025-07-15T04:41:04.944875599Z" level=info msg="Container 8a47b66b526bf2cdd263fe67cf301529ca289f8e18c3dd05a1a912a0f11632ad: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:04.983104 containerd[1889]: time="2025-07-15T04:41:04.983052735Z" level=info msg="CreateContainer within sandbox \"5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8a47b66b526bf2cdd263fe67cf301529ca289f8e18c3dd05a1a912a0f11632ad\"" Jul 15 04:41:04.983986 containerd[1889]: time="2025-07-15T04:41:04.983528223Z" level=info msg="StartContainer for \"8a47b66b526bf2cdd263fe67cf301529ca289f8e18c3dd05a1a912a0f11632ad\"" Jul 15 04:41:04.984902 containerd[1889]: time="2025-07-15T04:41:04.984874006Z" level=info msg="connecting to shim 
8a47b66b526bf2cdd263fe67cf301529ca289f8e18c3dd05a1a912a0f11632ad" address="unix:///run/containerd/s/815cd498eaf391a21cf31ed436fdfb4dfc6922d641002356b4fe73eba92f99b6" protocol=ttrpc version=3 Jul 15 04:41:04.997142 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Jul 15 04:41:05.016922 systemd[1]: Started cri-containerd-8a47b66b526bf2cdd263fe67cf301529ca289f8e18c3dd05a1a912a0f11632ad.scope - libcontainer container 8a47b66b526bf2cdd263fe67cf301529ca289f8e18c3dd05a1a912a0f11632ad. Jul 15 04:41:05.053666 containerd[1889]: time="2025-07-15T04:41:05.053546712Z" level=info msg="StartContainer for \"8a47b66b526bf2cdd263fe67cf301529ca289f8e18c3dd05a1a912a0f11632ad\" returns successfully" Jul 15 04:41:05.055703 containerd[1889]: time="2025-07-15T04:41:05.055673443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 04:41:06.763753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921986754.mount: Deactivated successfully. Jul 15 04:41:06.866198 containerd[1889]: time="2025-07-15T04:41:06.866138508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:06.869544 containerd[1889]: time="2025-07-15T04:41:06.869482866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 04:41:06.876280 containerd[1889]: time="2025-07-15T04:41:06.876211926Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:06.888999 containerd[1889]: time="2025-07-15T04:41:06.888948566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:06.889527 containerd[1889]: 
time="2025-07-15T04:41:06.889252437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.833217421s" Jul 15 04:41:06.889527 containerd[1889]: time="2025-07-15T04:41:06.889279421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 04:41:06.892835 containerd[1889]: time="2025-07-15T04:41:06.892791223Z" level=info msg="CreateContainer within sandbox \"5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 04:41:06.944895 containerd[1889]: time="2025-07-15T04:41:06.944842231Z" level=info msg="Container 5f179c06104efc053af248d532b3615f8e9c3561d8c1c2e927f059c74f2712ed: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:06.949774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676337328.mount: Deactivated successfully. 
Jul 15 04:41:06.975047 containerd[1889]: time="2025-07-15T04:41:06.974932457Z" level=info msg="CreateContainer within sandbox \"5978e095e6e108c79276e6175bf853ba1319be38801eac2a440a2e0f5f00cd9d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5f179c06104efc053af248d532b3615f8e9c3561d8c1c2e927f059c74f2712ed\"" Jul 15 04:41:06.979886 containerd[1889]: time="2025-07-15T04:41:06.979761665Z" level=info msg="StartContainer for \"5f179c06104efc053af248d532b3615f8e9c3561d8c1c2e927f059c74f2712ed\"" Jul 15 04:41:06.980711 containerd[1889]: time="2025-07-15T04:41:06.980624053Z" level=info msg="connecting to shim 5f179c06104efc053af248d532b3615f8e9c3561d8c1c2e927f059c74f2712ed" address="unix:///run/containerd/s/815cd498eaf391a21cf31ed436fdfb4dfc6922d641002356b4fe73eba92f99b6" protocol=ttrpc version=3 Jul 15 04:41:07.002888 systemd[1]: Started cri-containerd-5f179c06104efc053af248d532b3615f8e9c3561d8c1c2e927f059c74f2712ed.scope - libcontainer container 5f179c06104efc053af248d532b3615f8e9c3561d8c1c2e927f059c74f2712ed. 
Jul 15 04:41:07.039600 containerd[1889]: time="2025-07-15T04:41:07.039342952Z" level=info msg="StartContainer for \"5f179c06104efc053af248d532b3615f8e9c3561d8c1c2e927f059c74f2712ed\" returns successfully" Jul 15 04:41:07.473394 containerd[1889]: time="2025-07-15T04:41:07.473353720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmdcq,Uid:22d074d0-17db-4b77-9198-a899ad91ed6c,Namespace:kube-system,Attempt:0,}" Jul 15 04:41:07.583476 systemd-networkd[1479]: calia413fc59808: Link UP Jul 15 04:41:07.584348 systemd-networkd[1479]: calia413fc59808: Gained carrier Jul 15 04:41:07.614333 containerd[1889]: 2025-07-15 04:41:07.520 [INFO][4805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0 coredns-7c65d6cfc9- kube-system 22d074d0-17db-4b77-9198-a899ad91ed6c 814 0 2025-07-15 04:40:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a coredns-7c65d6cfc9-kmdcq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia413fc59808 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-" Jul 15 04:41:07.614333 containerd[1889]: 2025-07-15 04:41:07.521 [INFO][4805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" Jul 15 04:41:07.614333 containerd[1889]: 2025-07-15 04:41:07.540 [INFO][4818] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" HandleID="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.540 [INFO][4818] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" HandleID="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"coredns-7c65d6cfc9-kmdcq", "timestamp":"2025-07-15 04:41:07.540523798 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.540 [INFO][4818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.540 [INFO][4818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.540 [INFO][4818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.546 [INFO][4818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.552 [INFO][4818] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.556 [INFO][4818] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.558 [INFO][4818] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614553 containerd[1889]: 2025-07-15 04:41:07.560 [INFO][4818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614708 containerd[1889]: 2025-07-15 04:41:07.560 [INFO][4818] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614708 containerd[1889]: 2025-07-15 04:41:07.561 [INFO][4818] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6 Jul 15 04:41:07.614708 containerd[1889]: 2025-07-15 04:41:07.566 [INFO][4818] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614708 containerd[1889]: 2025-07-15 04:41:07.579 [INFO][4818] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.32.194/26] block=192.168.32.192/26 handle="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614708 containerd[1889]: 2025-07-15 04:41:07.579 [INFO][4818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.194/26] handle="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:07.614708 containerd[1889]: 2025-07-15 04:41:07.579 [INFO][4818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:07.614708 containerd[1889]: 2025-07-15 04:41:07.579 [INFO][4818] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.194/26] IPv6=[] ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" HandleID="k8s-pod-network.d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" Jul 15 04:41:07.614878 containerd[1889]: 2025-07-15 04:41:07.581 [INFO][4805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"22d074d0-17db-4b77-9198-a899ad91ed6c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"coredns-7c65d6cfc9-kmdcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia413fc59808", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:07.614878 containerd[1889]: 2025-07-15 04:41:07.581 [INFO][4805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.194/32] ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" Jul 15 04:41:07.614878 containerd[1889]: 2025-07-15 04:41:07.581 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia413fc59808 ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" Jul 15 04:41:07.614878 containerd[1889]: 2025-07-15 04:41:07.584 [INFO][4805] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" Jul 15 04:41:07.614878 containerd[1889]: 2025-07-15 04:41:07.585 [INFO][4805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"22d074d0-17db-4b77-9198-a899ad91ed6c", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6", Pod:"coredns-7c65d6cfc9-kmdcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia413fc59808", MAC:"9e:87:be:86:2a:e2", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:07.614878 containerd[1889]: 2025-07-15 04:41:07.611 [INFO][4805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmdcq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kmdcq-eth0" Jul 15 04:41:07.642975 kubelet[3397]: I0715 04:41:07.642910 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6797759d77-x4sdx" podStartSLOduration=2.0982676160000002 podStartE2EDuration="5.642895438s" podCreationTimestamp="2025-07-15 04:41:02 +0000 UTC" firstStartedPulling="2025-07-15 04:41:03.345665183 +0000 UTC m=+34.928866426" lastFinishedPulling="2025-07-15 04:41:06.890293005 +0000 UTC m=+38.473494248" observedRunningTime="2025-07-15 04:41:07.642522478 +0000 UTC m=+39.225723721" watchObservedRunningTime="2025-07-15 04:41:07.642895438 +0000 UTC m=+39.226096681" Jul 15 04:41:07.690594 containerd[1889]: time="2025-07-15T04:41:07.690547400Z" level=info msg="connecting to shim d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6" address="unix:///run/containerd/s/ba6a34986ee7d43629167deed01fade7f4361330b5862695b75e9599e057e8ce" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:07.710957 systemd[1]: Started cri-containerd-d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6.scope - libcontainer container 
d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6. Jul 15 04:41:07.743887 containerd[1889]: time="2025-07-15T04:41:07.743292672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmdcq,Uid:22d074d0-17db-4b77-9198-a899ad91ed6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6\"" Jul 15 04:41:07.747217 containerd[1889]: time="2025-07-15T04:41:07.747092544Z" level=info msg="CreateContainer within sandbox \"d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:41:07.792745 containerd[1889]: time="2025-07-15T04:41:07.792339979Z" level=info msg="Container 4f0087d72ce1e6837e4893958815d1a0012d7c00200b9d9987e99179f2a35b2f: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:07.816320 containerd[1889]: time="2025-07-15T04:41:07.816270286Z" level=info msg="CreateContainer within sandbox \"d7da22bc887ea38f7326587102020b16ab7104755341c9310a90dfe17dd7b8a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f0087d72ce1e6837e4893958815d1a0012d7c00200b9d9987e99179f2a35b2f\"" Jul 15 04:41:07.817439 containerd[1889]: time="2025-07-15T04:41:07.817337927Z" level=info msg="StartContainer for \"4f0087d72ce1e6837e4893958815d1a0012d7c00200b9d9987e99179f2a35b2f\"" Jul 15 04:41:07.818474 containerd[1889]: time="2025-07-15T04:41:07.818368367Z" level=info msg="connecting to shim 4f0087d72ce1e6837e4893958815d1a0012d7c00200b9d9987e99179f2a35b2f" address="unix:///run/containerd/s/ba6a34986ee7d43629167deed01fade7f4361330b5862695b75e9599e057e8ce" protocol=ttrpc version=3 Jul 15 04:41:07.836875 systemd[1]: Started cri-containerd-4f0087d72ce1e6837e4893958815d1a0012d7c00200b9d9987e99179f2a35b2f.scope - libcontainer container 4f0087d72ce1e6837e4893958815d1a0012d7c00200b9d9987e99179f2a35b2f. 
Jul 15 04:41:07.867772 containerd[1889]: time="2025-07-15T04:41:07.867690383Z" level=info msg="StartContainer for \"4f0087d72ce1e6837e4893958815d1a0012d7c00200b9d9987e99179f2a35b2f\" returns successfully" Jul 15 04:41:08.475122 containerd[1889]: time="2025-07-15T04:41:08.475013359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576b877669-tvxw7,Uid:75552206-ca3b-4a57-8b32-0477c8fbc72b,Namespace:calico-system,Attempt:0,}" Jul 15 04:41:08.580886 systemd-networkd[1479]: cali010b94233a1: Link UP Jul 15 04:41:08.581654 systemd-networkd[1479]: cali010b94233a1: Gained carrier Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.514 [INFO][4915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0 calico-kube-controllers-576b877669- calico-system 75552206-ca3b-4a57-8b32-0477c8fbc72b 816 0 2025-07-15 04:40:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:576b877669 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a calico-kube-controllers-576b877669-tvxw7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali010b94233a1 [] [] }} ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.514 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" 
WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.535 [INFO][4928] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" HandleID="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.535 [INFO][4928] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" HandleID="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"calico-kube-controllers-576b877669-tvxw7", "timestamp":"2025-07-15 04:41:08.535501098 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.535 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.535 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.535 [INFO][4928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.542 [INFO][4928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.548 [INFO][4928] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.555 [INFO][4928] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.556 [INFO][4928] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.558 [INFO][4928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.558 [INFO][4928] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.560 [INFO][4928] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59 Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.565 [INFO][4928] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.575 [INFO][4928] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.32.195/26] block=192.168.32.192/26 handle="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.575 [INFO][4928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.195/26] handle="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.575 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:08.597147 containerd[1889]: 2025-07-15 04:41:08.575 [INFO][4928] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.195/26] IPv6=[] ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" HandleID="k8s-pod-network.560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" Jul 15 04:41:08.598354 containerd[1889]: 2025-07-15 04:41:08.578 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0", GenerateName:"calico-kube-controllers-576b877669-", Namespace:"calico-system", SelfLink:"", UID:"75552206-ca3b-4a57-8b32-0477c8fbc72b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576b877669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"calico-kube-controllers-576b877669-tvxw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali010b94233a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:08.598354 containerd[1889]: 2025-07-15 04:41:08.578 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.195/32] ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" Jul 15 04:41:08.598354 containerd[1889]: 2025-07-15 04:41:08.578 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali010b94233a1 ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" Jul 15 04:41:08.598354 containerd[1889]: 2025-07-15 04:41:08.582 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" Jul 15 04:41:08.598354 containerd[1889]: 2025-07-15 04:41:08.582 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0", GenerateName:"calico-kube-controllers-576b877669-", Namespace:"calico-system", SelfLink:"", UID:"75552206-ca3b-4a57-8b32-0477c8fbc72b", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576b877669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59", Pod:"calico-kube-controllers-576b877669-tvxw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali010b94233a1", MAC:"ae:99:77:e7:0c:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:08.598354 containerd[1889]: 2025-07-15 04:41:08.594 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" Namespace="calico-system" Pod="calico-kube-controllers-576b877669-tvxw7" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--kube--controllers--576b877669--tvxw7-eth0" Jul 15 04:41:08.674777 kubelet[3397]: I0715 04:41:08.673513 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kmdcq" podStartSLOduration=34.673494405 podStartE2EDuration="34.673494405s" podCreationTimestamp="2025-07-15 04:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:41:08.645336039 +0000 UTC m=+40.228537314" watchObservedRunningTime="2025-07-15 04:41:08.673494405 +0000 UTC m=+40.256695656" Jul 15 04:41:08.684031 containerd[1889]: time="2025-07-15T04:41:08.683704314Z" level=info msg="connecting to shim 560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59" address="unix:///run/containerd/s/7f1ebbb8438f0af1e2506841b797294f5a00f141fd53b638a5d5fc01b1a1e9cf" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:08.709885 systemd[1]: Started cri-containerd-560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59.scope - libcontainer container 560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59. 
Jul 15 04:41:08.747918 containerd[1889]: time="2025-07-15T04:41:08.747285189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576b877669-tvxw7,Uid:75552206-ca3b-4a57-8b32-0477c8fbc72b,Namespace:calico-system,Attempt:0,} returns sandbox id \"560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59\"" Jul 15 04:41:08.750747 containerd[1889]: time="2025-07-15T04:41:08.750519928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 04:41:09.284966 systemd-networkd[1479]: calia413fc59808: Gained IPv6LL Jul 15 04:41:09.473183 containerd[1889]: time="2025-07-15T04:41:09.473132867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kzd65,Uid:63a73d2d-8d64-4a1f-9338-cce59b96a36a,Namespace:kube-system,Attempt:0,}" Jul 15 04:41:09.576325 systemd-networkd[1479]: cali6a9f6b84a00: Link UP Jul 15 04:41:09.577099 systemd-networkd[1479]: cali6a9f6b84a00: Gained carrier Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.512 [INFO][5003] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0 coredns-7c65d6cfc9- kube-system 63a73d2d-8d64-4a1f-9338-cce59b96a36a 803 0 2025-07-15 04:40:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a coredns-7c65d6cfc9-kzd65 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a9f6b84a00 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.512 [INFO][5003] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.532 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" HandleID="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.533 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" HandleID="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003136b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"coredns-7c65d6cfc9-kzd65", "timestamp":"2025-07-15 04:41:09.532432035 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.534 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.534 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.534 [INFO][5014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.540 [INFO][5014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.544 [INFO][5014] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.549 [INFO][5014] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.552 [INFO][5014] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.555 [INFO][5014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.555 [INFO][5014] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.556 [INFO][5014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031 Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.564 [INFO][5014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.572 [INFO][5014] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.32.196/26] block=192.168.32.192/26 handle="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.572 [INFO][5014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.196/26] handle="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.572 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:09.599998 containerd[1889]: 2025-07-15 04:41:09.572 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.196/26] IPv6=[] ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" HandleID="k8s-pod-network.0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" Jul 15 04:41:09.600478 containerd[1889]: 2025-07-15 04:41:09.573 [INFO][5003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"63a73d2d-8d64-4a1f-9338-cce59b96a36a", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"coredns-7c65d6cfc9-kzd65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a9f6b84a00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:09.600478 containerd[1889]: 2025-07-15 04:41:09.574 [INFO][5003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.196/32] ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" Jul 15 04:41:09.600478 containerd[1889]: 2025-07-15 04:41:09.574 [INFO][5003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a9f6b84a00 ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" Jul 15 04:41:09.600478 containerd[1889]: 2025-07-15 04:41:09.577 [INFO][5003] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" Jul 15 04:41:09.600478 containerd[1889]: 2025-07-15 04:41:09.578 [INFO][5003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"63a73d2d-8d64-4a1f-9338-cce59b96a36a", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031", Pod:"coredns-7c65d6cfc9-kzd65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a9f6b84a00", MAC:"3a:e2:69:10:af:f4", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:09.600478 containerd[1889]: 2025-07-15 04:41:09.595 [INFO][5003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kzd65" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-coredns--7c65d6cfc9--kzd65-eth0" Jul 15 04:41:09.666871 containerd[1889]: time="2025-07-15T04:41:09.666828551Z" level=info msg="connecting to shim 0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031" address="unix:///run/containerd/s/ca4a8eb1b9f1a727db2822f44d7279fffc4be278b439e5719e303642a499b64f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:09.689875 systemd[1]: Started cri-containerd-0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031.scope - libcontainer container 0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031. 
Jul 15 04:41:09.737406 containerd[1889]: time="2025-07-15T04:41:09.737364253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kzd65,Uid:63a73d2d-8d64-4a1f-9338-cce59b96a36a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031\"" Jul 15 04:41:09.740662 containerd[1889]: time="2025-07-15T04:41:09.740620245Z" level=info msg="CreateContainer within sandbox \"0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:41:09.801048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003810093.mount: Deactivated successfully. Jul 15 04:41:09.804053 containerd[1889]: time="2025-07-15T04:41:09.803995077Z" level=info msg="Container 11c0b94cd6e135e09d57a264911a9ec80720ca9327ea623d89cafe6b5c18504d: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:09.855190 containerd[1889]: time="2025-07-15T04:41:09.855073045Z" level=info msg="CreateContainer within sandbox \"0987af2be0ee4c62cc1b51de6dd3bf8aa28e59edd636aa1840a5d30ee7fd7031\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11c0b94cd6e135e09d57a264911a9ec80720ca9327ea623d89cafe6b5c18504d\"" Jul 15 04:41:09.856142 containerd[1889]: time="2025-07-15T04:41:09.856107728Z" level=info msg="StartContainer for \"11c0b94cd6e135e09d57a264911a9ec80720ca9327ea623d89cafe6b5c18504d\"" Jul 15 04:41:09.856944 containerd[1889]: time="2025-07-15T04:41:09.856916372Z" level=info msg="connecting to shim 11c0b94cd6e135e09d57a264911a9ec80720ca9327ea623d89cafe6b5c18504d" address="unix:///run/containerd/s/ca4a8eb1b9f1a727db2822f44d7279fffc4be278b439e5719e303642a499b64f" protocol=ttrpc version=3 Jul 15 04:41:09.871856 systemd[1]: Started cri-containerd-11c0b94cd6e135e09d57a264911a9ec80720ca9327ea623d89cafe6b5c18504d.scope - libcontainer container 11c0b94cd6e135e09d57a264911a9ec80720ca9327ea623d89cafe6b5c18504d. 
Jul 15 04:41:09.905674 containerd[1889]: time="2025-07-15T04:41:09.905631114Z" level=info msg="StartContainer for \"11c0b94cd6e135e09d57a264911a9ec80720ca9327ea623d89cafe6b5c18504d\" returns successfully" Jul 15 04:41:10.308915 systemd-networkd[1479]: cali010b94233a1: Gained IPv6LL Jul 15 04:41:10.473663 containerd[1889]: time="2025-07-15T04:41:10.473621346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76686786-25zvj,Uid:6a8e1578-4086-49fa-9074-cde2d003aafe,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:41:10.475554 containerd[1889]: time="2025-07-15T04:41:10.475448153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-95btq,Uid:d891310b-e597-4d2a-831e-002c63441df3,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:41:10.652235 systemd-networkd[1479]: calia57822b61d8: Link UP Jul 15 04:41:10.654305 systemd-networkd[1479]: calia57822b61d8: Gained carrier Jul 15 04:41:10.666368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874737391.mount: Deactivated successfully. 
Jul 15 04:41:10.670532 kubelet[3397]: I0715 04:41:10.670053 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kzd65" podStartSLOduration=36.670031417 podStartE2EDuration="36.670031417s" podCreationTimestamp="2025-07-15 04:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:41:10.665742805 +0000 UTC m=+42.248944056" watchObservedRunningTime="2025-07-15 04:41:10.670031417 +0000 UTC m=+42.253232660" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.550 [INFO][5116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0 calico-apiserver-f76686786- calico-apiserver 6a8e1578-4086-49fa-9074-cde2d003aafe 818 0 2025-07-15 04:40:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f76686786 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a calico-apiserver-f76686786-25zvj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia57822b61d8 [] [] }} ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.550 [INFO][5116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" Jul 15 
04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.585 [INFO][5141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" HandleID="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.585 [INFO][5141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" HandleID="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"calico-apiserver-f76686786-25zvj", "timestamp":"2025-07-15 04:41:10.585279625 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.585 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.585 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.585 [INFO][5141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.595 [INFO][5141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.600 [INFO][5141] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.605 [INFO][5141] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.607 [INFO][5141] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.612 [INFO][5141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.614 [INFO][5141] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.616 [INFO][5141] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815 Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.623 [INFO][5141] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.633 [INFO][5141] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.32.197/26] block=192.168.32.192/26 handle="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.633 [INFO][5141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.197/26] handle="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.633 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:10.706161 containerd[1889]: 2025-07-15 04:41:10.633 [INFO][5141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.197/26] IPv6=[] ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" HandleID="k8s-pod-network.9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" Jul 15 04:41:10.706609 containerd[1889]: 2025-07-15 04:41:10.637 [INFO][5116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0", GenerateName:"calico-apiserver-f76686786-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a8e1578-4086-49fa-9074-cde2d003aafe", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"f76686786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"calico-apiserver-f76686786-25zvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia57822b61d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:10.706609 containerd[1889]: 2025-07-15 04:41:10.637 [INFO][5116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.197/32] ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" Jul 15 04:41:10.706609 containerd[1889]: 2025-07-15 04:41:10.637 [INFO][5116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia57822b61d8 ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" Jul 15 04:41:10.706609 containerd[1889]: 2025-07-15 04:41:10.656 [INFO][5116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" 
WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" Jul 15 04:41:10.706609 containerd[1889]: 2025-07-15 04:41:10.664 [INFO][5116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0", GenerateName:"calico-apiserver-f76686786-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a8e1578-4086-49fa-9074-cde2d003aafe", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f76686786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815", Pod:"calico-apiserver-f76686786-25zvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia57822b61d8", MAC:"6a:b2:37:86:27:65", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:10.706609 containerd[1889]: 2025-07-15 04:41:10.700 [INFO][5116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-25zvj" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--25zvj-eth0" Jul 15 04:41:10.810580 containerd[1889]: time="2025-07-15T04:41:10.809695580Z" level=info msg="connecting to shim 9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815" address="unix:///run/containerd/s/45164c4f51b1b3fc99e52699423fe92581880d50b56dec2d2c2edee1c16c516e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:10.822808 systemd-networkd[1479]: cali9a946bc9ce2: Link UP Jul 15 04:41:10.823001 systemd-networkd[1479]: cali9a946bc9ce2: Gained carrier Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.581 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0 calico-apiserver-65866b6cfd- calico-apiserver d891310b-e597-4d2a-831e-002c63441df3 812 0 2025-07-15 04:40:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65866b6cfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a calico-apiserver-65866b6cfd-95btq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a946bc9ce2 [] [] }} ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" 
WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.582 [INFO][5125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.630 [INFO][5149] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.630 [INFO][5149] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"calico-apiserver-65866b6cfd-95btq", "timestamp":"2025-07-15 04:41:10.630063712 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.630 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.633 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.633 [INFO][5149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.698 [INFO][5149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.717 [INFO][5149] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.743 [INFO][5149] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.762 [INFO][5149] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.767 [INFO][5149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.767 [INFO][5149] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.771 [INFO][5149] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.783 [INFO][5149] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" 
host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.796 [INFO][5149] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.198/26] block=192.168.32.192/26 handle="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.796 [INFO][5149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.198/26] handle="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.796 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:10.851517 containerd[1889]: 2025-07-15 04:41:10.796 [INFO][5149] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.198/26] IPv6=[] ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:10.852635 containerd[1889]: 2025-07-15 04:41:10.809 [INFO][5125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0", GenerateName:"calico-apiserver-65866b6cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d891310b-e597-4d2a-831e-002c63441df3", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 43, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65866b6cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"calico-apiserver-65866b6cfd-95btq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a946bc9ce2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:10.852635 containerd[1889]: 2025-07-15 04:41:10.809 [INFO][5125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.198/32] ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:10.852635 containerd[1889]: 2025-07-15 04:41:10.809 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a946bc9ce2 ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:10.852635 containerd[1889]: 2025-07-15 04:41:10.822 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:10.852635 containerd[1889]: 2025-07-15 04:41:10.823 [INFO][5125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0", GenerateName:"calico-apiserver-65866b6cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d891310b-e597-4d2a-831e-002c63441df3", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65866b6cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b", Pod:"calico-apiserver-65866b6cfd-95btq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a946bc9ce2", MAC:"16:fa:6f:6b:d0:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:10.852635 containerd[1889]: 2025-07-15 04:41:10.845 [INFO][5125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-95btq" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:10.881008 systemd[1]: Started cri-containerd-9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815.scope - libcontainer container 9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815. Jul 15 04:41:10.950038 containerd[1889]: time="2025-07-15T04:41:10.949976229Z" level=info msg="connecting to shim 36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" address="unix:///run/containerd/s/98aa70d056e7f7dbf7aa93e34fd5c500ab4397e49effa19b6c4d84ab23c18c33" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:11.011897 systemd[1]: Started cri-containerd-36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b.scope - libcontainer container 36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b. 
Jul 15 04:41:11.088544 containerd[1889]: time="2025-07-15T04:41:11.088506858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76686786-25zvj,Uid:6a8e1578-4086-49fa-9074-cde2d003aafe,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815\"" Jul 15 04:41:11.092881 containerd[1889]: time="2025-07-15T04:41:11.092817670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-95btq,Uid:d891310b-e597-4d2a-831e-002c63441df3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\"" Jul 15 04:41:11.233698 containerd[1889]: time="2025-07-15T04:41:11.233097599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:11.237089 containerd[1889]: time="2025-07-15T04:41:11.237056151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 04:41:11.244109 containerd[1889]: time="2025-07-15T04:41:11.244077169Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:11.251478 containerd[1889]: time="2025-07-15T04:41:11.251444391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:11.252101 containerd[1889]: time="2025-07-15T04:41:11.252072749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.501518188s" Jul 15 04:41:11.252276 containerd[1889]: time="2025-07-15T04:41:11.252180537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 04:41:11.253211 containerd[1889]: time="2025-07-15T04:41:11.253184131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:41:11.263364 containerd[1889]: time="2025-07-15T04:41:11.263310880Z" level=info msg="CreateContainer within sandbox \"560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 04:41:11.296774 containerd[1889]: time="2025-07-15T04:41:11.296444997Z" level=info msg="Container 1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:11.334815 containerd[1889]: time="2025-07-15T04:41:11.334772974Z" level=info msg="CreateContainer within sandbox \"560788fc9a5e3cd7dbbab08390c5fba19427169c36c742a6474dd7bf2a883c59\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\"" Jul 15 04:41:11.335809 containerd[1889]: time="2025-07-15T04:41:11.335781705Z" level=info msg="StartContainer for \"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\"" Jul 15 04:41:11.336935 containerd[1889]: time="2025-07-15T04:41:11.336905095Z" level=info msg="connecting to shim 1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256" address="unix:///run/containerd/s/7f1ebbb8438f0af1e2506841b797294f5a00f141fd53b638a5d5fc01b1a1e9cf" protocol=ttrpc version=3 Jul 15 04:41:11.354876 systemd[1]: Started 
cri-containerd-1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256.scope - libcontainer container 1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256. Jul 15 04:41:11.392777 containerd[1889]: time="2025-07-15T04:41:11.392738835Z" level=info msg="StartContainer for \"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" returns successfully" Jul 15 04:41:11.462009 systemd-networkd[1479]: cali6a9f6b84a00: Gained IPv6LL Jul 15 04:41:11.473534 containerd[1889]: time="2025-07-15T04:41:11.473488777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-dcrr6,Uid:5f5712fa-77a5-4c75-abb7-89d823859b2f,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:41:11.600394 systemd-networkd[1479]: calicb8b40b7980: Link UP Jul 15 04:41:11.601921 systemd-networkd[1479]: calicb8b40b7980: Gained carrier Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.530 [INFO][5316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0 calico-apiserver-65866b6cfd- calico-apiserver 5f5712fa-77a5-4c75-abb7-89d823859b2f 815 0 2025-07-15 04:40:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65866b6cfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a calico-apiserver-65866b6cfd-dcrr6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicb8b40b7980 [] [] }} ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.530 [INFO][5316] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.554 [INFO][5327] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.554 [INFO][5327] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"calico-apiserver-65866b6cfd-dcrr6", "timestamp":"2025-07-15 04:41:11.554832531 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.555 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.555 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.555 [INFO][5327] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.560 [INFO][5327] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.564 [INFO][5327] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.569 [INFO][5327] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.571 [INFO][5327] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.574 [INFO][5327] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.574 [INFO][5327] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.576 [INFO][5327] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173 Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.582 [INFO][5327] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.593 [INFO][5327] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.32.199/26] block=192.168.32.192/26 handle="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.593 [INFO][5327] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.199/26] handle="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.593 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:11.620152 containerd[1889]: 2025-07-15 04:41:11.593 [INFO][5327] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.199/26] IPv6=[] ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:11.621411 containerd[1889]: 2025-07-15 04:41:11.595 [INFO][5316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0", GenerateName:"calico-apiserver-65866b6cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f5712fa-77a5-4c75-abb7-89d823859b2f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"65866b6cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"calico-apiserver-65866b6cfd-dcrr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb8b40b7980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:11.621411 containerd[1889]: 2025-07-15 04:41:11.596 [INFO][5316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.199/32] ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:11.621411 containerd[1889]: 2025-07-15 04:41:11.596 [INFO][5316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb8b40b7980 ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:11.621411 containerd[1889]: 2025-07-15 04:41:11.602 [INFO][5316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" 
WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:11.621411 containerd[1889]: 2025-07-15 04:41:11.603 [INFO][5316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0", GenerateName:"calico-apiserver-65866b6cfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f5712fa-77a5-4c75-abb7-89d823859b2f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65866b6cfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173", Pod:"calico-apiserver-65866b6cfd-dcrr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb8b40b7980", MAC:"1a:33:c2:7b:39:15", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:11.621411 containerd[1889]: 2025-07-15 04:41:11.616 [INFO][5316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Namespace="calico-apiserver" Pod="calico-apiserver-65866b6cfd-dcrr6" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:11.671403 kubelet[3397]: I0715 04:41:11.670598 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-576b877669-tvxw7" podStartSLOduration=23.167500725 podStartE2EDuration="25.670580095s" podCreationTimestamp="2025-07-15 04:40:46 +0000 UTC" firstStartedPulling="2025-07-15 04:41:08.749971284 +0000 UTC m=+40.333172527" lastFinishedPulling="2025-07-15 04:41:11.253050646 +0000 UTC m=+42.836251897" observedRunningTime="2025-07-15 04:41:11.670281213 +0000 UTC m=+43.253482504" watchObservedRunningTime="2025-07-15 04:41:11.670580095 +0000 UTC m=+43.253781346" Jul 15 04:41:11.691260 containerd[1889]: time="2025-07-15T04:41:11.691222078Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" id:\"c26c74efb95be99716ee00bd8ce20d44baa53cdef48c96744f6965a4228faa8b\" pid:5357 exited_at:{seconds:1752554471 nanos:690749614}" Jul 15 04:41:11.716962 containerd[1889]: time="2025-07-15T04:41:11.716863185Z" level=info msg="connecting to shim 83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" address="unix:///run/containerd/s/3e4f0c4680dd3d79eed95f8c573fdd1cecd268964f42510d4e452935f95fc055" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:11.740869 systemd[1]: Started cri-containerd-83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173.scope - libcontainer container 
83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173. Jul 15 04:41:11.899253 containerd[1889]: time="2025-07-15T04:41:11.899125729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65866b6cfd-dcrr6,Uid:5f5712fa-77a5-4c75-abb7-89d823859b2f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\"" Jul 15 04:41:12.474293 containerd[1889]: time="2025-07-15T04:41:12.473694339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdg2,Uid:5ed20801-e92d-42b2-94d6-5d7666efeedc,Namespace:calico-system,Attempt:0,}" Jul 15 04:41:12.575478 systemd-networkd[1479]: calib3249b5c166: Link UP Jul 15 04:41:12.576317 systemd-networkd[1479]: calib3249b5c166: Gained carrier Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.509 [INFO][5412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0 csi-node-driver- calico-system 5ed20801-e92d-42b2-94d6-5d7666efeedc 698 0 2025-07-15 04:40:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a csi-node-driver-mrdg2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib3249b5c166 [] [] }} ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.509 [INFO][5412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.528 [INFO][5424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" HandleID="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.529 [INFO][5424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" HandleID="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"csi-node-driver-mrdg2", "timestamp":"2025-07-15 04:41:12.528793421 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.529 [INFO][5424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.529 [INFO][5424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.529 [INFO][5424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.534 [INFO][5424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.538 [INFO][5424] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.541 [INFO][5424] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.543 [INFO][5424] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.545 [INFO][5424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.545 [INFO][5424] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.546 [INFO][5424] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2 Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.557 [INFO][5424] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.568 [INFO][5424] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.32.200/26] block=192.168.32.192/26 handle="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.568 [INFO][5424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.200/26] handle="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.568 [INFO][5424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:12.603877 containerd[1889]: 2025-07-15 04:41:12.568 [INFO][5424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.200/26] IPv6=[] ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" HandleID="k8s-pod-network.6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" Jul 15 04:41:12.605402 containerd[1889]: 2025-07-15 04:41:12.571 [INFO][5412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5ed20801-e92d-42b2-94d6-5d7666efeedc", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"csi-node-driver-mrdg2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3249b5c166", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:12.605402 containerd[1889]: 2025-07-15 04:41:12.571 [INFO][5412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.200/32] ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" Jul 15 04:41:12.605402 containerd[1889]: 2025-07-15 04:41:12.571 [INFO][5412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3249b5c166 ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" Jul 15 04:41:12.605402 containerd[1889]: 2025-07-15 04:41:12.577 [INFO][5412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" Jul 15 04:41:12.605402 
containerd[1889]: 2025-07-15 04:41:12.578 [INFO][5412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5ed20801-e92d-42b2-94d6-5d7666efeedc", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2", Pod:"csi-node-driver-mrdg2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3249b5c166", MAC:"86:6e:10:db:8b:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:12.605402 containerd[1889]: 
2025-07-15 04:41:12.595 [INFO][5412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" Namespace="calico-system" Pod="csi-node-driver-mrdg2" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-csi--node--driver--mrdg2-eth0" Jul 15 04:41:12.613002 systemd-networkd[1479]: calia57822b61d8: Gained IPv6LL Jul 15 04:41:12.740945 systemd-networkd[1479]: cali9a946bc9ce2: Gained IPv6LL Jul 15 04:41:12.868906 systemd-networkd[1479]: calicb8b40b7980: Gained IPv6LL Jul 15 04:41:12.899622 containerd[1889]: time="2025-07-15T04:41:12.899314986Z" level=info msg="connecting to shim 6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2" address="unix:///run/containerd/s/94baccb592603965d605d3d0d0c3ac578235a40f2d6fef102411fa2a3307992e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:12.920869 systemd[1]: Started cri-containerd-6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2.scope - libcontainer container 6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2. 
Jul 15 04:41:12.946603 containerd[1889]: time="2025-07-15T04:41:12.946546054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdg2,Uid:5ed20801-e92d-42b2-94d6-5d7666efeedc,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2\"" Jul 15 04:41:13.309310 kubelet[3397]: I0715 04:41:13.309233 3397 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:41:13.370601 containerd[1889]: time="2025-07-15T04:41:13.370397560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\" id:\"610f0644869c30dffc80c954f4f601c632579c28031fb2dd79013bcae8fc3683\" pid:5497 exited_at:{seconds:1752554473 nanos:369585164}" Jul 15 04:41:13.446710 containerd[1889]: time="2025-07-15T04:41:13.446593161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\" id:\"439bb360afef8c39a20b46324d49baac21d6a40d1a0532336c26f2f61d3108a7\" pid:5521 exited_at:{seconds:1752554473 nanos:446167330}" Jul 15 04:41:13.472987 containerd[1889]: time="2025-07-15T04:41:13.472944996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fghwk,Uid:38bed32a-910d-4564-a20c-a0e4fa024f2c,Namespace:calico-system,Attempt:0,}" Jul 15 04:41:13.571091 systemd-networkd[1479]: cali63a582148d0: Link UP Jul 15 04:41:13.571886 systemd-networkd[1479]: cali63a582148d0: Gained carrier Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.508 [INFO][5532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0 goldmane-58fd7646b9- calico-system 38bed32a-910d-4564-a20c-a0e4fa024f2c 808 0 2025-07-15 04:40:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a goldmane-58fd7646b9-fghwk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali63a582148d0 [] [] }} ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.509 [INFO][5532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.527 [INFO][5545] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" HandleID="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.527 [INFO][5545] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" HandleID="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"goldmane-58fd7646b9-fghwk", "timestamp":"2025-07-15 04:41:13.52745053 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.527 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.527 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.527 [INFO][5545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.534 [INFO][5545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.538 [INFO][5545] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.542 [INFO][5545] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.544 [INFO][5545] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.546 [INFO][5545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.546 [INFO][5545] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.548 [INFO][5545] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.552 [INFO][5545] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.564 [INFO][5545] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.201/26] block=192.168.32.192/26 handle="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.564 [INFO][5545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.201/26] handle="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.564 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:41:13.585389 containerd[1889]: 2025-07-15 04:41:13.564 [INFO][5545] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.201/26] IPv6=[] ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" HandleID="k8s-pod-network.966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" Jul 15 04:41:13.587953 containerd[1889]: 2025-07-15 04:41:13.565 [INFO][5532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"38bed32a-910d-4564-a20c-a0e4fa024f2c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"goldmane-58fd7646b9-fghwk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali63a582148d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:13.587953 containerd[1889]: 2025-07-15 04:41:13.565 [INFO][5532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.201/32] ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" Jul 15 04:41:13.587953 containerd[1889]: 2025-07-15 04:41:13.565 [INFO][5532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63a582148d0 ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" Jul 15 04:41:13.587953 containerd[1889]: 2025-07-15 04:41:13.569 [INFO][5532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" Jul 15 04:41:13.587953 containerd[1889]: 2025-07-15 04:41:13.569 [INFO][5532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", 
UID:"38bed32a-910d-4564-a20c-a0e4fa024f2c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c", Pod:"goldmane-58fd7646b9-fghwk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63a582148d0", MAC:"f2:53:7b:85:b8:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:13.587953 containerd[1889]: 2025-07-15 04:41:13.582 [INFO][5532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" Namespace="calico-system" Pod="goldmane-58fd7646b9-fghwk" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-goldmane--58fd7646b9--fghwk-eth0" Jul 15 04:41:13.661697 containerd[1889]: time="2025-07-15T04:41:13.661570007Z" level=info msg="connecting to shim 966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c" address="unix:///run/containerd/s/06cc5e8fef9c292ca4bbf229530ce0c7fc7de812b4178e3ff1622250865846c5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:13.688862 systemd[1]: Started 
cri-containerd-966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c.scope - libcontainer container 966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c. Jul 15 04:41:13.829046 systemd-networkd[1479]: calib3249b5c166: Gained IPv6LL Jul 15 04:41:13.893352 containerd[1889]: time="2025-07-15T04:41:13.893304496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fghwk,Uid:38bed32a-910d-4564-a20c-a0e4fa024f2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c\"" Jul 15 04:41:14.581021 containerd[1889]: time="2025-07-15T04:41:14.580961272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:14.585818 containerd[1889]: time="2025-07-15T04:41:14.585758012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 04:41:14.589769 containerd[1889]: time="2025-07-15T04:41:14.589698246Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:14.595847 containerd[1889]: time="2025-07-15T04:41:14.595777442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:14.596187 containerd[1889]: time="2025-07-15T04:41:14.596158214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 
3.342947738s" Jul 15 04:41:14.596187 containerd[1889]: time="2025-07-15T04:41:14.596187855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:41:14.597984 containerd[1889]: time="2025-07-15T04:41:14.597960310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:41:14.599487 containerd[1889]: time="2025-07-15T04:41:14.599462452Z" level=info msg="CreateContainer within sandbox \"9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:41:14.640205 containerd[1889]: time="2025-07-15T04:41:14.639472065Z" level=info msg="Container d60c7b075bb0a89b900ecf4ef5544d29f5b553ed1bb535b01bbb43a4554f17df: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:14.671681 containerd[1889]: time="2025-07-15T04:41:14.671631068Z" level=info msg="CreateContainer within sandbox \"9da2e51d27d33650978815eb4d36d8e642dcafefe4427b36b968e13e09151815\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d60c7b075bb0a89b900ecf4ef5544d29f5b553ed1bb535b01bbb43a4554f17df\"" Jul 15 04:41:14.672247 containerd[1889]: time="2025-07-15T04:41:14.672210862Z" level=info msg="StartContainer for \"d60c7b075bb0a89b900ecf4ef5544d29f5b553ed1bb535b01bbb43a4554f17df\"" Jul 15 04:41:14.674423 containerd[1889]: time="2025-07-15T04:41:14.674393353Z" level=info msg="connecting to shim d60c7b075bb0a89b900ecf4ef5544d29f5b553ed1bb535b01bbb43a4554f17df" address="unix:///run/containerd/s/45164c4f51b1b3fc99e52699423fe92581880d50b56dec2d2c2edee1c16c516e" protocol=ttrpc version=3 Jul 15 04:41:14.695868 systemd[1]: Started cri-containerd-d60c7b075bb0a89b900ecf4ef5544d29f5b553ed1bb535b01bbb43a4554f17df.scope - libcontainer container d60c7b075bb0a89b900ecf4ef5544d29f5b553ed1bb535b01bbb43a4554f17df. 
Jul 15 04:41:14.737501 containerd[1889]: time="2025-07-15T04:41:14.737461079Z" level=info msg="StartContainer for \"d60c7b075bb0a89b900ecf4ef5544d29f5b553ed1bb535b01bbb43a4554f17df\" returns successfully" Jul 15 04:41:14.788870 systemd-networkd[1479]: cali63a582148d0: Gained IPv6LL Jul 15 04:41:14.977573 containerd[1889]: time="2025-07-15T04:41:14.977053304Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:14.981319 containerd[1889]: time="2025-07-15T04:41:14.981283387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 04:41:14.982700 containerd[1889]: time="2025-07-15T04:41:14.982663037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 384.676847ms" Jul 15 04:41:14.982700 containerd[1889]: time="2025-07-15T04:41:14.982699670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:41:14.983894 containerd[1889]: time="2025-07-15T04:41:14.983868987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:41:14.986628 containerd[1889]: time="2025-07-15T04:41:14.986600991Z" level=info msg="CreateContainer within sandbox \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:41:15.040281 containerd[1889]: time="2025-07-15T04:41:15.039653431Z" level=info msg="Container 371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432: CDI devices 
from CRI Config.CDIDevices: []" Jul 15 04:41:15.063799 containerd[1889]: time="2025-07-15T04:41:15.063752417Z" level=info msg="CreateContainer within sandbox \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\"" Jul 15 04:41:15.064842 containerd[1889]: time="2025-07-15T04:41:15.064811985Z" level=info msg="StartContainer for \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\"" Jul 15 04:41:15.066834 containerd[1889]: time="2025-07-15T04:41:15.066786799Z" level=info msg="connecting to shim 371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432" address="unix:///run/containerd/s/98aa70d056e7f7dbf7aa93e34fd5c500ab4397e49effa19b6c4d84ab23c18c33" protocol=ttrpc version=3 Jul 15 04:41:15.090864 systemd[1]: Started cri-containerd-371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432.scope - libcontainer container 371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432. 
Jul 15 04:41:15.130901 containerd[1889]: time="2025-07-15T04:41:15.130480824Z" level=info msg="StartContainer for \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" returns successfully" Jul 15 04:41:15.366265 containerd[1889]: time="2025-07-15T04:41:15.366220666Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:15.377502 containerd[1889]: time="2025-07-15T04:41:15.377461685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 04:41:15.378319 containerd[1889]: time="2025-07-15T04:41:15.378286951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 394.386188ms" Jul 15 04:41:15.378357 containerd[1889]: time="2025-07-15T04:41:15.378324712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:41:15.380998 containerd[1889]: time="2025-07-15T04:41:15.380964985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 04:41:15.382723 containerd[1889]: time="2025-07-15T04:41:15.382687143Z" level=info msg="CreateContainer within sandbox \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:41:15.443637 containerd[1889]: time="2025-07-15T04:41:15.443587514Z" level=info msg="Container 0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:15.467138 containerd[1889]: 
time="2025-07-15T04:41:15.467092553Z" level=info msg="CreateContainer within sandbox \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\"" Jul 15 04:41:15.469196 containerd[1889]: time="2025-07-15T04:41:15.469168793Z" level=info msg="StartContainer for \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\"" Jul 15 04:41:15.470454 containerd[1889]: time="2025-07-15T04:41:15.470422840Z" level=info msg="connecting to shim 0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4" address="unix:///run/containerd/s/3e4f0c4680dd3d79eed95f8c573fdd1cecd268964f42510d4e452935f95fc055" protocol=ttrpc version=3 Jul 15 04:41:15.501090 systemd[1]: Started cri-containerd-0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4.scope - libcontainer container 0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4. 
Jul 15 04:41:15.561503 containerd[1889]: time="2025-07-15T04:41:15.561325683Z" level=info msg="StartContainer for \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" returns successfully" Jul 15 04:41:15.727480 kubelet[3397]: I0715 04:41:15.726823 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f76686786-25zvj" podStartSLOduration=27.218504892 podStartE2EDuration="30.726804255s" podCreationTimestamp="2025-07-15 04:40:45 +0000 UTC" firstStartedPulling="2025-07-15 04:41:11.089881721 +0000 UTC m=+42.673082964" lastFinishedPulling="2025-07-15 04:41:14.598181036 +0000 UTC m=+46.181382327" observedRunningTime="2025-07-15 04:41:15.705012726 +0000 UTC m=+47.288213969" watchObservedRunningTime="2025-07-15 04:41:15.726804255 +0000 UTC m=+47.310005498" Jul 15 04:41:15.752952 kubelet[3397]: I0715 04:41:15.752880 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65866b6cfd-95btq" podStartSLOduration=28.863682672 podStartE2EDuration="32.752862901s" podCreationTimestamp="2025-07-15 04:40:43 +0000 UTC" firstStartedPulling="2025-07-15 04:41:11.094326538 +0000 UTC m=+42.677527781" lastFinishedPulling="2025-07-15 04:41:14.983506767 +0000 UTC m=+46.566708010" observedRunningTime="2025-07-15 04:41:15.75268676 +0000 UTC m=+47.335888003" watchObservedRunningTime="2025-07-15 04:41:15.752862901 +0000 UTC m=+47.336064144" Jul 15 04:41:15.753136 kubelet[3397]: I0715 04:41:15.753042 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65866b6cfd-dcrr6" podStartSLOduration=29.273107897 podStartE2EDuration="32.753037499s" podCreationTimestamp="2025-07-15 04:40:43 +0000 UTC" firstStartedPulling="2025-07-15 04:41:11.900275504 +0000 UTC m=+43.483476755" lastFinishedPulling="2025-07-15 04:41:15.380205114 +0000 UTC m=+46.963406357" observedRunningTime="2025-07-15 04:41:15.727587928 +0000 UTC 
m=+47.310789171" watchObservedRunningTime="2025-07-15 04:41:15.753037499 +0000 UTC m=+47.336238742" Jul 15 04:41:16.689120 kubelet[3397]: I0715 04:41:16.689082 3397 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:41:17.609421 containerd[1889]: time="2025-07-15T04:41:17.609358051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:17.614064 containerd[1889]: time="2025-07-15T04:41:17.613877007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 04:41:17.620376 containerd[1889]: time="2025-07-15T04:41:17.620320990Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:17.627116 containerd[1889]: time="2025-07-15T04:41:17.627070439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:17.627634 containerd[1889]: time="2025-07-15T04:41:17.627604071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.246612349s" Jul 15 04:41:17.627743 containerd[1889]: time="2025-07-15T04:41:17.627729355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 04:41:17.629502 containerd[1889]: time="2025-07-15T04:41:17.629476433Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 04:41:17.633823 containerd[1889]: time="2025-07-15T04:41:17.633279423Z" level=info msg="CreateContainer within sandbox \"6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 04:41:17.677915 containerd[1889]: time="2025-07-15T04:41:17.677873298Z" level=info msg="Container 0e84c2d88f09776ec2d4f4060353671ae3e2f3caa699a650b36bc38d543ccc76: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:17.679752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011546130.mount: Deactivated successfully. Jul 15 04:41:17.708140 containerd[1889]: time="2025-07-15T04:41:17.708081520Z" level=info msg="CreateContainer within sandbox \"6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0e84c2d88f09776ec2d4f4060353671ae3e2f3caa699a650b36bc38d543ccc76\"" Jul 15 04:41:17.709221 containerd[1889]: time="2025-07-15T04:41:17.708948443Z" level=info msg="StartContainer for \"0e84c2d88f09776ec2d4f4060353671ae3e2f3caa699a650b36bc38d543ccc76\"" Jul 15 04:41:17.712231 containerd[1889]: time="2025-07-15T04:41:17.712195279Z" level=info msg="connecting to shim 0e84c2d88f09776ec2d4f4060353671ae3e2f3caa699a650b36bc38d543ccc76" address="unix:///run/containerd/s/94baccb592603965d605d3d0d0c3ac578235a40f2d6fef102411fa2a3307992e" protocol=ttrpc version=3 Jul 15 04:41:17.780174 systemd[1]: Started cri-containerd-0e84c2d88f09776ec2d4f4060353671ae3e2f3caa699a650b36bc38d543ccc76.scope - libcontainer container 0e84c2d88f09776ec2d4f4060353671ae3e2f3caa699a650b36bc38d543ccc76. 
Jul 15 04:41:17.836691 containerd[1889]: time="2025-07-15T04:41:17.836633711Z" level=info msg="StartContainer for \"0e84c2d88f09776ec2d4f4060353671ae3e2f3caa699a650b36bc38d543ccc76\" returns successfully" Jul 15 04:41:20.656368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368429675.mount: Deactivated successfully. Jul 15 04:41:21.380759 containerd[1889]: time="2025-07-15T04:41:21.380536532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:21.390175 containerd[1889]: time="2025-07-15T04:41:21.389992439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 04:41:21.405210 containerd[1889]: time="2025-07-15T04:41:21.404978798Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:21.417205 containerd[1889]: time="2025-07-15T04:41:21.417144509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:21.417881 containerd[1889]: time="2025-07-15T04:41:21.417848725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.788347003s" Jul 15 04:41:21.417988 containerd[1889]: time="2025-07-15T04:41:21.417972241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 
04:41:21.423836 containerd[1889]: time="2025-07-15T04:41:21.423463533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 04:41:21.430944 containerd[1889]: time="2025-07-15T04:41:21.430904162Z" level=info msg="CreateContainer within sandbox \"966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 04:41:21.472974 containerd[1889]: time="2025-07-15T04:41:21.472328887Z" level=info msg="Container 7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:21.502150 containerd[1889]: time="2025-07-15T04:41:21.502101439Z" level=info msg="CreateContainer within sandbox \"966bcf42a084d14617194d707a53f6448a631e060bfb926baa304b49b06dc69c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\"" Jul 15 04:41:21.503486 containerd[1889]: time="2025-07-15T04:41:21.503458693Z" level=info msg="StartContainer for \"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\"" Jul 15 04:41:21.505647 containerd[1889]: time="2025-07-15T04:41:21.505597238Z" level=info msg="connecting to shim 7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e" address="unix:///run/containerd/s/06cc5e8fef9c292ca4bbf229530ce0c7fc7de812b4178e3ff1622250865846c5" protocol=ttrpc version=3 Jul 15 04:41:21.533897 systemd[1]: Started cri-containerd-7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e.scope - libcontainer container 7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e. 
Jul 15 04:41:21.577388 containerd[1889]: time="2025-07-15T04:41:21.577321500Z" level=info msg="StartContainer for \"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" returns successfully" Jul 15 04:41:21.739117 kubelet[3397]: I0715 04:41:21.738875 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-fghwk" podStartSLOduration=29.210874308 podStartE2EDuration="36.738838017s" podCreationTimestamp="2025-07-15 04:40:45 +0000 UTC" firstStartedPulling="2025-07-15 04:41:13.894548559 +0000 UTC m=+45.477749802" lastFinishedPulling="2025-07-15 04:41:21.422512268 +0000 UTC m=+53.005713511" observedRunningTime="2025-07-15 04:41:21.736788859 +0000 UTC m=+53.319990110" watchObservedRunningTime="2025-07-15 04:41:21.738838017 +0000 UTC m=+53.322039260" Jul 15 04:41:21.809344 containerd[1889]: time="2025-07-15T04:41:21.809284027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"b88fc0106548402a4b658216fd5f931b5c8824d753e6edd9664beb2777e629f2\" pid:5832 exit_status:1 exited_at:{seconds:1752554481 nanos:808943000}" Jul 15 04:41:22.769833 containerd[1889]: time="2025-07-15T04:41:22.769783701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"71fb7e875b73a0db9b75ba517b904cdd9e79d94897132c9a782b2c01d7dc3da8\" pid:5856 exit_status:1 exited_at:{seconds:1752554482 nanos:769356760}" Jul 15 04:41:23.601121 containerd[1889]: time="2025-07-15T04:41:23.601056907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:23.605318 containerd[1889]: time="2025-07-15T04:41:23.605175756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 
04:41:23.611185 containerd[1889]: time="2025-07-15T04:41:23.611147081Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:23.619136 containerd[1889]: time="2025-07-15T04:41:23.619072227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:23.619739 containerd[1889]: time="2025-07-15T04:41:23.619495920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.195750826s" Jul 15 04:41:23.619739 containerd[1889]: time="2025-07-15T04:41:23.619533065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 04:41:23.625510 containerd[1889]: time="2025-07-15T04:41:23.625483093Z" level=info msg="CreateContainer within sandbox \"6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 04:41:23.664669 containerd[1889]: time="2025-07-15T04:41:23.664625631Z" level=info msg="Container 4507cfe10a56996da92486ccbfd8d46698aa1023760ab04bea7e1bfd4effefc3: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:23.695397 containerd[1889]: time="2025-07-15T04:41:23.695338415Z" level=info msg="CreateContainer within sandbox \"6d105984153cfbf1538c980ef27417502c4a3e7097da3b2d49f5a0b56f67f3d2\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4507cfe10a56996da92486ccbfd8d46698aa1023760ab04bea7e1bfd4effefc3\"" Jul 15 04:41:23.696796 containerd[1889]: time="2025-07-15T04:41:23.696773116Z" level=info msg="StartContainer for \"4507cfe10a56996da92486ccbfd8d46698aa1023760ab04bea7e1bfd4effefc3\"" Jul 15 04:41:23.697917 containerd[1889]: time="2025-07-15T04:41:23.697883079Z" level=info msg="connecting to shim 4507cfe10a56996da92486ccbfd8d46698aa1023760ab04bea7e1bfd4effefc3" address="unix:///run/containerd/s/94baccb592603965d605d3d0d0c3ac578235a40f2d6fef102411fa2a3307992e" protocol=ttrpc version=3 Jul 15 04:41:23.717972 systemd[1]: Started cri-containerd-4507cfe10a56996da92486ccbfd8d46698aa1023760ab04bea7e1bfd4effefc3.scope - libcontainer container 4507cfe10a56996da92486ccbfd8d46698aa1023760ab04bea7e1bfd4effefc3. Jul 15 04:41:23.782799 containerd[1889]: time="2025-07-15T04:41:23.782486346Z" level=info msg="StartContainer for \"4507cfe10a56996da92486ccbfd8d46698aa1023760ab04bea7e1bfd4effefc3\" returns successfully" Jul 15 04:41:23.814282 containerd[1889]: time="2025-07-15T04:41:23.814093919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"a547b7875bb2926213bf6877487e60e222ea6cc57df01734e5fc45402dd11b97\" pid:5904 exit_status:1 exited_at:{seconds:1752554483 nanos:813806030}" Jul 15 04:41:24.573067 kubelet[3397]: I0715 04:41:24.572969 3397 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 04:41:24.575839 kubelet[3397]: I0715 04:41:24.575736 3397 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 04:41:31.017503 containerd[1889]: time="2025-07-15T04:41:31.017461721Z" level=info msg="TaskExit event in podsandbox 
handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"12f8fd04bcdbf23f4ad08e4eae45aeaf71c53318f0dcdf9a567a07285eb57274\" pid:5950 exited_at:{seconds:1752554491 nanos:17002682}" Jul 15 04:41:31.036656 kubelet[3397]: I0715 04:41:31.036592 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mrdg2" podStartSLOduration=35.364296071 podStartE2EDuration="46.036577304s" podCreationTimestamp="2025-07-15 04:40:45 +0000 UTC" firstStartedPulling="2025-07-15 04:41:12.948008104 +0000 UTC m=+44.531209347" lastFinishedPulling="2025-07-15 04:41:23.620289337 +0000 UTC m=+55.203490580" observedRunningTime="2025-07-15 04:41:24.744866134 +0000 UTC m=+56.328067481" watchObservedRunningTime="2025-07-15 04:41:31.036577304 +0000 UTC m=+62.619778555" Jul 15 04:41:31.199284 containerd[1889]: time="2025-07-15T04:41:31.199233790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" id:\"37ba53eaf99fbc916f5a318926500442d7b646ffe37bb38ee17e7ed0afe9f041\" pid:5974 exited_at:{seconds:1752554491 nanos:197612659}" Jul 15 04:41:37.869444 containerd[1889]: time="2025-07-15T04:41:37.869388451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" id:\"92c15b266615c1d3905b010be98750deb6f56e3df919d032ed4ddbac701a0961\" pid:6004 exited_at:{seconds:1752554497 nanos:869171300}" Jul 15 04:41:43.386366 containerd[1889]: time="2025-07-15T04:41:43.386323811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\" id:\"88e215baa5cdf1ee4911bd84b6dfcde9375492c69cefb3fbfbf7fb881ca56229\" pid:6026 exited_at:{seconds:1752554503 nanos:385952039}" Jul 15 04:41:50.544102 kubelet[3397]: I0715 04:41:50.543834 3397 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Jul 15 04:41:50.619324 containerd[1889]: time="2025-07-15T04:41:50.619261879Z" level=info msg="StopContainer for \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" with timeout 30 (s)" Jul 15 04:41:50.628229 containerd[1889]: time="2025-07-15T04:41:50.628194488Z" level=info msg="Stop container \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" with signal terminated" Jul 15 04:41:50.657457 systemd[1]: cri-containerd-0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4.scope: Deactivated successfully. Jul 15 04:41:50.658101 systemd[1]: cri-containerd-0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4.scope: Consumed 1.054s CPU time, 46M memory peak. Jul 15 04:41:50.661679 containerd[1889]: time="2025-07-15T04:41:50.661636672Z" level=info msg="received exit event container_id:\"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" id:\"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" pid:5701 exit_status:1 exited_at:{seconds:1752554510 nanos:660478614}" Jul 15 04:41:50.662599 containerd[1889]: time="2025-07-15T04:41:50.662573676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" id:\"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" pid:5701 exit_status:1 exited_at:{seconds:1752554510 nanos:660478614}" Jul 15 04:41:50.679920 systemd[1]: Created slice kubepods-besteffort-pod94476aba_027d_4e30_ae50_c33fae8b4f46.slice - libcontainer container kubepods-besteffort-pod94476aba_027d_4e30_ae50_c33fae8b4f46.slice. Jul 15 04:41:50.699488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4-rootfs.mount: Deactivated successfully. 
Jul 15 04:41:50.774758 kubelet[3397]: I0715 04:41:50.774628 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94476aba-027d-4e30-ae50-c33fae8b4f46-calico-apiserver-certs\") pod \"calico-apiserver-f76686786-mb5xv\" (UID: \"94476aba-027d-4e30-ae50-c33fae8b4f46\") " pod="calico-apiserver/calico-apiserver-f76686786-mb5xv" Jul 15 04:41:50.774758 kubelet[3397]: I0715 04:41:50.774674 3397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dsxk\" (UniqueName: \"kubernetes.io/projected/94476aba-027d-4e30-ae50-c33fae8b4f46-kube-api-access-7dsxk\") pod \"calico-apiserver-f76686786-mb5xv\" (UID: \"94476aba-027d-4e30-ae50-c33fae8b4f46\") " pod="calico-apiserver/calico-apiserver-f76686786-mb5xv" Jul 15 04:41:50.840193 containerd[1889]: time="2025-07-15T04:41:50.840059941Z" level=info msg="StopContainer for \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" returns successfully" Jul 15 04:41:50.841014 containerd[1889]: time="2025-07-15T04:41:50.840974768Z" level=info msg="StopPodSandbox for \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\"" Jul 15 04:41:50.841093 containerd[1889]: time="2025-07-15T04:41:50.841040618Z" level=info msg="Container to stop \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:41:50.849270 systemd[1]: cri-containerd-83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173.scope: Deactivated successfully. 
Jul 15 04:41:50.857252 containerd[1889]: time="2025-07-15T04:41:50.857213714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" id:\"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" pid:5397 exit_status:137 exited_at:{seconds:1752554510 nanos:856911401}" Jul 15 04:41:50.898363 containerd[1889]: time="2025-07-15T04:41:50.898323485Z" level=info msg="shim disconnected" id=83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173 namespace=k8s.io Jul 15 04:41:50.898491 containerd[1889]: time="2025-07-15T04:41:50.898356422Z" level=warning msg="cleaning up after shim disconnected" id=83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173 namespace=k8s.io Jul 15 04:41:50.898491 containerd[1889]: time="2025-07-15T04:41:50.898384191Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 04:41:50.900164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173-rootfs.mount: Deactivated successfully. Jul 15 04:41:50.972154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173-shm.mount: Deactivated successfully. 
Jul 15 04:41:50.984959 containerd[1889]: time="2025-07-15T04:41:50.984834891Z" level=info msg="received exit event sandbox_id:\"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" exit_status:137 exited_at:{seconds:1752554510 nanos:856911401}" Jul 15 04:41:50.987137 containerd[1889]: time="2025-07-15T04:41:50.987098103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76686786-mb5xv,Uid:94476aba-027d-4e30-ae50-c33fae8b4f46,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:41:51.045411 systemd-networkd[1479]: calicb8b40b7980: Link DOWN Jul 15 04:41:51.045784 systemd-networkd[1479]: calicb8b40b7980: Lost carrier Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.043 [INFO][6120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.043 [INFO][6120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" iface="eth0" netns="/var/run/netns/cni-d2acfc64-6cb9-1aa3-7d28-53e0932a6fea" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.044 [INFO][6120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" iface="eth0" netns="/var/run/netns/cni-d2acfc64-6cb9-1aa3-7d28-53e0932a6fea" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.052 [INFO][6120] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" after=9.220425ms iface="eth0" netns="/var/run/netns/cni-d2acfc64-6cb9-1aa3-7d28-53e0932a6fea" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.052 [INFO][6120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.052 [INFO][6120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.093 [INFO][6139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.097 [INFO][6139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.097 [INFO][6139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.149 [INFO][6139] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.149 [INFO][6139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.151 [INFO][6139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:51.155151 containerd[1889]: 2025-07-15 04:41:51.153 [INFO][6120] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:41:51.155923 containerd[1889]: time="2025-07-15T04:41:51.155885014Z" level=info msg="TearDown network for sandbox \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" successfully" Jul 15 04:41:51.155923 containerd[1889]: time="2025-07-15T04:41:51.155920615Z" level=info msg="StopPodSandbox for \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" returns successfully" Jul 15 04:41:51.222405 systemd-networkd[1479]: cali5824d325967: Link UP Jul 15 04:41:51.222795 systemd-networkd[1479]: cali5824d325967: Gained carrier Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.054 [INFO][6127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0 calico-apiserver-f76686786- calico-apiserver 94476aba-027d-4e30-ae50-c33fae8b4f46 1150 0 2025-07-15 04:41:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f76686786 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4396.0.0-n-9104e8bf1a calico-apiserver-f76686786-mb5xv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5824d325967 [] [] }} ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.054 [INFO][6127] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" 
WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.102 [INFO][6148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" HandleID="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.103 [INFO][6148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" HandleID="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4396.0.0-n-9104e8bf1a", "pod":"calico-apiserver-f76686786-mb5xv", "timestamp":"2025-07-15 04:41:51.102938491 +0000 UTC"}, Hostname:"ci-4396.0.0-n-9104e8bf1a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.103 [INFO][6148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.151 [INFO][6148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.151 [INFO][6148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4396.0.0-n-9104e8bf1a' Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.165 [INFO][6148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.171 [INFO][6148] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.185 [INFO][6148] ipam/ipam.go 511: Trying affinity for 192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.195 [INFO][6148] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.197 [INFO][6148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.197 [INFO][6148] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.198 [INFO][6148] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.203 [INFO][6148] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.215 [INFO][6148] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.32.202/26] block=192.168.32.192/26 handle="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.215 [INFO][6148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.202/26] handle="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" host="ci-4396.0.0-n-9104e8bf1a" Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.215 [INFO][6148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:51.249309 containerd[1889]: 2025-07-15 04:41:51.215 [INFO][6148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.202/26] IPv6=[] ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" HandleID="k8s-pod-network.4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" Jul 15 04:41:51.250231 containerd[1889]: 2025-07-15 04:41:51.218 [INFO][6127] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0", GenerateName:"calico-apiserver-f76686786-", Namespace:"calico-apiserver", SelfLink:"", UID:"94476aba-027d-4e30-ae50-c33fae8b4f46", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"f76686786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"", Pod:"calico-apiserver-f76686786-mb5xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5824d325967", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:51.250231 containerd[1889]: 2025-07-15 04:41:51.218 [INFO][6127] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.202/32] ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" Jul 15 04:41:51.250231 containerd[1889]: 2025-07-15 04:41:51.218 [INFO][6127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5824d325967 ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" Jul 15 04:41:51.250231 containerd[1889]: 2025-07-15 04:41:51.223 [INFO][6127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" 
WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" Jul 15 04:41:51.250231 containerd[1889]: 2025-07-15 04:41:51.227 [INFO][6127] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0", GenerateName:"calico-apiserver-f76686786-", Namespace:"calico-apiserver", SelfLink:"", UID:"94476aba-027d-4e30-ae50-c33fae8b4f46", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f76686786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4396.0.0-n-9104e8bf1a", ContainerID:"4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c", Pod:"calico-apiserver-f76686786-mb5xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5824d325967", MAC:"8a:ce:5e:12:f2:92", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:51.250231 containerd[1889]: 2025-07-15 04:41:51.246 [INFO][6127] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" Namespace="calico-apiserver" Pod="calico-apiserver-f76686786-mb5xv" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--f76686786--mb5xv-eth0" Jul 15 04:41:51.279353 kubelet[3397]: I0715 04:41:51.279276 3397 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkqfv\" (UniqueName: \"kubernetes.io/projected/5f5712fa-77a5-4c75-abb7-89d823859b2f-kube-api-access-dkqfv\") pod \"5f5712fa-77a5-4c75-abb7-89d823859b2f\" (UID: \"5f5712fa-77a5-4c75-abb7-89d823859b2f\") " Jul 15 04:41:51.279353 kubelet[3397]: I0715 04:41:51.279327 3397 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f5712fa-77a5-4c75-abb7-89d823859b2f-calico-apiserver-certs\") pod \"5f5712fa-77a5-4c75-abb7-89d823859b2f\" (UID: \"5f5712fa-77a5-4c75-abb7-89d823859b2f\") " Jul 15 04:41:51.281523 kubelet[3397]: I0715 04:41:51.281479 3397 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5712fa-77a5-4c75-abb7-89d823859b2f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "5f5712fa-77a5-4c75-abb7-89d823859b2f" (UID: "5f5712fa-77a5-4c75-abb7-89d823859b2f"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 04:41:51.282970 kubelet[3397]: I0715 04:41:51.282936 3397 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5712fa-77a5-4c75-abb7-89d823859b2f-kube-api-access-dkqfv" (OuterVolumeSpecName: "kube-api-access-dkqfv") pod "5f5712fa-77a5-4c75-abb7-89d823859b2f" (UID: "5f5712fa-77a5-4c75-abb7-89d823859b2f"). InnerVolumeSpecName "kube-api-access-dkqfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:41:51.317546 containerd[1889]: time="2025-07-15T04:41:51.316149112Z" level=info msg="connecting to shim 4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c" address="unix:///run/containerd/s/8e2c9141ec1262cd4ea7c474eceaed791c1409c1b3ddd1758bc4bb7a52384bd2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:51.333896 systemd[1]: Started cri-containerd-4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c.scope - libcontainer container 4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c. 
Jul 15 04:41:51.380904 kubelet[3397]: I0715 04:41:51.380852 3397 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkqfv\" (UniqueName: \"kubernetes.io/projected/5f5712fa-77a5-4c75-abb7-89d823859b2f-kube-api-access-dkqfv\") on node \"ci-4396.0.0-n-9104e8bf1a\" DevicePath \"\"" Jul 15 04:41:51.380904 kubelet[3397]: I0715 04:41:51.380892 3397 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f5712fa-77a5-4c75-abb7-89d823859b2f-calico-apiserver-certs\") on node \"ci-4396.0.0-n-9104e8bf1a\" DevicePath \"\"" Jul 15 04:41:51.495079 containerd[1889]: time="2025-07-15T04:41:51.495003401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f76686786-mb5xv,Uid:94476aba-027d-4e30-ae50-c33fae8b4f46,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c\"" Jul 15 04:41:51.499016 containerd[1889]: time="2025-07-15T04:41:51.498982183Z" level=info msg="CreateContainer within sandbox \"4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:41:51.533339 containerd[1889]: time="2025-07-15T04:41:51.533287073Z" level=info msg="Container 604dc77d64f7ed55c3bbb4a540a21a0e3538c56cc91caabb4b4ad4bdeebce136: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:51.559566 containerd[1889]: time="2025-07-15T04:41:51.559437849Z" level=info msg="CreateContainer within sandbox \"4dc4e1d230049efbbaa5b4c1488289367578ab8160e80976143d7be37318d93c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"604dc77d64f7ed55c3bbb4a540a21a0e3538c56cc91caabb4b4ad4bdeebce136\"" Jul 15 04:41:51.560736 containerd[1889]: time="2025-07-15T04:41:51.560455679Z" level=info msg="StartContainer for \"604dc77d64f7ed55c3bbb4a540a21a0e3538c56cc91caabb4b4ad4bdeebce136\"" Jul 15 04:41:51.562427 containerd[1889]: 
time="2025-07-15T04:41:51.562399857Z" level=info msg="connecting to shim 604dc77d64f7ed55c3bbb4a540a21a0e3538c56cc91caabb4b4ad4bdeebce136" address="unix:///run/containerd/s/8e2c9141ec1262cd4ea7c474eceaed791c1409c1b3ddd1758bc4bb7a52384bd2" protocol=ttrpc version=3 Jul 15 04:41:51.584984 systemd[1]: Started cri-containerd-604dc77d64f7ed55c3bbb4a540a21a0e3538c56cc91caabb4b4ad4bdeebce136.scope - libcontainer container 604dc77d64f7ed55c3bbb4a540a21a0e3538c56cc91caabb4b4ad4bdeebce136. Jul 15 04:41:51.642218 containerd[1889]: time="2025-07-15T04:41:51.642175303Z" level=info msg="StartContainer for \"604dc77d64f7ed55c3bbb4a540a21a0e3538c56cc91caabb4b4ad4bdeebce136\" returns successfully" Jul 15 04:41:51.706879 systemd[1]: run-netns-cni\x2dd2acfc64\x2d6cb9\x2d1aa3\x2d7d28\x2d53e0932a6fea.mount: Deactivated successfully. Jul 15 04:41:51.706973 systemd[1]: var-lib-kubelet-pods-5f5712fa\x2d77a5\x2d4c75\x2dabb7\x2d89d823859b2f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddkqfv.mount: Deactivated successfully. Jul 15 04:41:51.707022 systemd[1]: var-lib-kubelet-pods-5f5712fa\x2d77a5\x2d4c75\x2dabb7\x2d89d823859b2f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 15 04:41:51.796487 kubelet[3397]: I0715 04:41:51.796314 3397 scope.go:117] "RemoveContainer" containerID="0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4" Jul 15 04:41:51.803171 containerd[1889]: time="2025-07-15T04:41:51.803121821Z" level=info msg="RemoveContainer for \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\"" Jul 15 04:41:51.811755 systemd[1]: Removed slice kubepods-besteffort-pod5f5712fa_77a5_4c75_abb7_89d823859b2f.slice - libcontainer container kubepods-besteffort-pod5f5712fa_77a5_4c75_abb7_89d823859b2f.slice. Jul 15 04:41:51.811830 systemd[1]: kubepods-besteffort-pod5f5712fa_77a5_4c75_abb7_89d823859b2f.slice: Consumed 1.070s CPU time, 46.3M memory peak. 
Jul 15 04:41:51.828431 kubelet[3397]: I0715 04:41:51.828351 3397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f76686786-mb5xv" podStartSLOduration=1.828334057 podStartE2EDuration="1.828334057s" podCreationTimestamp="2025-07-15 04:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:41:51.827184895 +0000 UTC m=+83.410386170" watchObservedRunningTime="2025-07-15 04:41:51.828334057 +0000 UTC m=+83.411535308" Jul 15 04:41:51.843738 containerd[1889]: time="2025-07-15T04:41:51.841965606Z" level=info msg="RemoveContainer for \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" returns successfully" Jul 15 04:41:51.843738 containerd[1889]: time="2025-07-15T04:41:51.843445329Z" level=error msg="ContainerStatus for \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\": not found" Jul 15 04:41:51.843952 kubelet[3397]: I0715 04:41:51.843113 3397 scope.go:117] "RemoveContainer" containerID="0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4" Jul 15 04:41:51.843952 kubelet[3397]: E0715 04:41:51.843600 3397 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\": not found" containerID="0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4" Jul 15 04:41:51.845195 kubelet[3397]: I0715 04:41:51.844427 3397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4"} err="failed to get container status 
\"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bf9bcf960c5183d5f3800da9f3c31c68bd2bdb7c0e4b137ab968cb3dcd043b4\": not found" Jul 15 04:41:52.474454 kubelet[3397]: I0715 04:41:52.474412 3397 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5712fa-77a5-4c75-abb7-89d823859b2f" path="/var/lib/kubelet/pods/5f5712fa-77a5-4c75-abb7-89d823859b2f/volumes" Jul 15 04:41:52.677037 systemd-networkd[1479]: cali5824d325967: Gained IPv6LL Jul 15 04:41:52.760747 containerd[1889]: time="2025-07-15T04:41:52.760562919Z" level=info msg="StopContainer for \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" with timeout 30 (s)" Jul 15 04:41:52.762026 containerd[1889]: time="2025-07-15T04:41:52.761845173Z" level=info msg="Stop container \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" with signal terminated" Jul 15 04:41:52.791480 systemd[1]: cri-containerd-371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432.scope: Deactivated successfully. Jul 15 04:41:52.797429 containerd[1889]: time="2025-07-15T04:41:52.797383139Z" level=info msg="received exit event container_id:\"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" id:\"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" pid:5666 exit_status:1 exited_at:{seconds:1752554512 nanos:797106499}" Jul 15 04:41:52.798299 containerd[1889]: time="2025-07-15T04:41:52.798205380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" id:\"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" pid:5666 exit_status:1 exited_at:{seconds:1752554512 nanos:797106499}" Jul 15 04:41:52.834473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432-rootfs.mount: Deactivated successfully. 
Jul 15 04:41:52.911981 containerd[1889]: time="2025-07-15T04:41:52.911930081Z" level=info msg="StopContainer for \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" returns successfully" Jul 15 04:41:52.912470 containerd[1889]: time="2025-07-15T04:41:52.912380439Z" level=info msg="StopPodSandbox for \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\"" Jul 15 04:41:52.912470 containerd[1889]: time="2025-07-15T04:41:52.912431000Z" level=info msg="Container to stop \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:41:52.923010 systemd[1]: cri-containerd-36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b.scope: Deactivated successfully. Jul 15 04:41:52.924597 containerd[1889]: time="2025-07-15T04:41:52.924534791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" id:\"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" pid:5257 exit_status:137 exited_at:{seconds:1752554512 nanos:923854715}" Jul 15 04:41:52.954213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b-rootfs.mount: Deactivated successfully. 
Jul 15 04:41:52.957312 containerd[1889]: time="2025-07-15T04:41:52.956975305Z" level=info msg="shim disconnected" id=36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b namespace=k8s.io Jul 15 04:41:52.957312 containerd[1889]: time="2025-07-15T04:41:52.957171695Z" level=warning msg="cleaning up after shim disconnected" id=36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b namespace=k8s.io Jul 15 04:41:52.957312 containerd[1889]: time="2025-07-15T04:41:52.957206968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 04:41:52.990276 containerd[1889]: time="2025-07-15T04:41:52.990218252Z" level=info msg="received exit event sandbox_id:\"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" exit_status:137 exited_at:{seconds:1752554512 nanos:923854715}" Jul 15 04:41:52.994888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b-shm.mount: Deactivated successfully. Jul 15 04:41:53.040280 systemd-networkd[1479]: cali9a946bc9ce2: Link DOWN Jul 15 04:41:53.040287 systemd-networkd[1479]: cali9a946bc9ce2: Lost carrier Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.036 [INFO][6325] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.037 [INFO][6325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" iface="eth0" netns="/var/run/netns/cni-8c4f8ae8-1b88-b072-b9ce-cda413348780" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.037 [INFO][6325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" iface="eth0" netns="/var/run/netns/cni-8c4f8ae8-1b88-b072-b9ce-cda413348780" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.045 [INFO][6325] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" after=8.012582ms iface="eth0" netns="/var/run/netns/cni-8c4f8ae8-1b88-b072-b9ce-cda413348780" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.045 [INFO][6325] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.046 [INFO][6325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.080 [INFO][6333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.082 [INFO][6333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.083 [INFO][6333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.157 [INFO][6333] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.158 [INFO][6333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.159 [INFO][6333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:41:53.163025 containerd[1889]: 2025-07-15 04:41:53.160 [INFO][6325] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:41:53.166377 systemd[1]: run-netns-cni\x2d8c4f8ae8\x2d1b88\x2db072\x2db9ce\x2dcda413348780.mount: Deactivated successfully. 
Jul 15 04:41:53.188979 containerd[1889]: time="2025-07-15T04:41:53.188911397Z" level=info msg="TearDown network for sandbox \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" successfully" Jul 15 04:41:53.188979 containerd[1889]: time="2025-07-15T04:41:53.188971903Z" level=info msg="StopPodSandbox for \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" returns successfully" Jul 15 04:41:53.394565 kubelet[3397]: I0715 04:41:53.394445 3397 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d891310b-e597-4d2a-831e-002c63441df3-calico-apiserver-certs\") pod \"d891310b-e597-4d2a-831e-002c63441df3\" (UID: \"d891310b-e597-4d2a-831e-002c63441df3\") " Jul 15 04:41:53.394565 kubelet[3397]: I0715 04:41:53.394501 3397 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m92bx\" (UniqueName: \"kubernetes.io/projected/d891310b-e597-4d2a-831e-002c63441df3-kube-api-access-m92bx\") pod \"d891310b-e597-4d2a-831e-002c63441df3\" (UID: \"d891310b-e597-4d2a-831e-002c63441df3\") " Jul 15 04:41:53.401387 systemd[1]: var-lib-kubelet-pods-d891310b\x2de597\x2d4d2a\x2d831e\x2d002c63441df3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm92bx.mount: Deactivated successfully. Jul 15 04:41:53.403182 kubelet[3397]: I0715 04:41:53.403129 3397 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d891310b-e597-4d2a-831e-002c63441df3-kube-api-access-m92bx" (OuterVolumeSpecName: "kube-api-access-m92bx") pod "d891310b-e597-4d2a-831e-002c63441df3" (UID: "d891310b-e597-4d2a-831e-002c63441df3"). InnerVolumeSpecName "kube-api-access-m92bx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:41:53.404723 kubelet[3397]: I0715 04:41:53.404495 3397 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d891310b-e597-4d2a-831e-002c63441df3-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d891310b-e597-4d2a-831e-002c63441df3" (UID: "d891310b-e597-4d2a-831e-002c63441df3"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 04:41:53.495894 kubelet[3397]: I0715 04:41:53.495844 3397 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d891310b-e597-4d2a-831e-002c63441df3-calico-apiserver-certs\") on node \"ci-4396.0.0-n-9104e8bf1a\" DevicePath \"\"" Jul 15 04:41:53.496128 kubelet[3397]: I0715 04:41:53.496047 3397 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m92bx\" (UniqueName: \"kubernetes.io/projected/d891310b-e597-4d2a-831e-002c63441df3-kube-api-access-m92bx\") on node \"ci-4396.0.0-n-9104e8bf1a\" DevicePath \"\"" Jul 15 04:41:53.820841 kubelet[3397]: I0715 04:41:53.819854 3397 scope.go:117] "RemoveContainer" containerID="371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432" Jul 15 04:41:53.821731 containerd[1889]: time="2025-07-15T04:41:53.821518409Z" level=info msg="RemoveContainer for \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\"" Jul 15 04:41:53.827074 systemd[1]: Removed slice kubepods-besteffort-podd891310b_e597_4d2a_831e_002c63441df3.slice - libcontainer container kubepods-besteffort-podd891310b_e597_4d2a_831e_002c63441df3.slice. Jul 15 04:41:53.832831 systemd[1]: var-lib-kubelet-pods-d891310b\x2de597\x2d4d2a\x2d831e\x2d002c63441df3-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jul 15 04:41:53.839815 containerd[1889]: time="2025-07-15T04:41:53.839692448Z" level=info msg="RemoveContainer for \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" returns successfully" Jul 15 04:41:53.840139 kubelet[3397]: I0715 04:41:53.840044 3397 scope.go:117] "RemoveContainer" containerID="371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432" Jul 15 04:41:53.840310 containerd[1889]: time="2025-07-15T04:41:53.840274267Z" level=error msg="ContainerStatus for \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\": not found" Jul 15 04:41:53.840440 kubelet[3397]: E0715 04:41:53.840422 3397 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\": not found" containerID="371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432" Jul 15 04:41:53.840822 kubelet[3397]: I0715 04:41:53.840549 3397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432"} err="failed to get container status \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\": rpc error: code = NotFound desc = an error occurred when try to find container \"371afd796d1782c626caaf69ae6438a1ace03d0066c2ad365ee613fcd8e4f432\": not found" Jul 15 04:41:54.474709 kubelet[3397]: I0715 04:41:54.474485 3397 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d891310b-e597-4d2a-831e-002c63441df3" path="/var/lib/kubelet/pods/d891310b-e597-4d2a-831e-002c63441df3/volumes" Jul 15 04:42:01.004105 containerd[1889]: time="2025-07-15T04:42:01.004060074Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"4f7e7651af47742c4e269f15e63cd8e4b3f86dacb4fa12dc9d3d05f955e7538b\" pid:6363 exited_at:{seconds:1752554521 nanos:3732223}" Jul 15 04:42:01.195052 containerd[1889]: time="2025-07-15T04:42:01.195012083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" id:\"c57af9a67f7cb54a2e3b076724ce163bc67174dda4400c9b43060cbb3efbca22\" pid:6385 exited_at:{seconds:1752554521 nanos:194669224}" Jul 15 04:42:11.141427 containerd[1889]: time="2025-07-15T04:42:11.141360017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"bd1f78f400eee33dc7d916c78da16c3ca12c6c7f1c04b0c3e2a2d6c0f3729912\" pid:6410 exited_at:{seconds:1752554531 nanos:141052792}" Jul 15 04:42:13.370084 containerd[1889]: time="2025-07-15T04:42:13.370020443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\" id:\"1d0505ecd6309a8dd6f927905037bbae5ea15007cf7fdf1715ad8021e94c7872\" pid:6433 exited_at:{seconds:1752554533 nanos:369230166}" Jul 15 04:42:15.039940 systemd[1]: Started sshd@7-10.200.20.21:22-10.200.16.10:47582.service - OpenSSH per-connection server daemon (10.200.16.10:47582). Jul 15 04:42:15.547279 sshd[6447]: Accepted publickey for core from 10.200.16.10 port 47582 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:15.548814 sshd-session[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:15.552463 systemd-logind[1868]: New session 10 of user core. Jul 15 04:42:15.560845 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 15 04:42:15.977622 sshd[6450]: Connection closed by 10.200.16.10 port 47582 Jul 15 04:42:15.978166 sshd-session[6447]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:15.983373 systemd[1]: sshd@7-10.200.20.21:22-10.200.16.10:47582.service: Deactivated successfully. Jul 15 04:42:15.983623 systemd-logind[1868]: Session 10 logged out. Waiting for processes to exit. Jul 15 04:42:15.987393 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 04:42:15.989461 systemd-logind[1868]: Removed session 10. Jul 15 04:42:21.080268 systemd[1]: Started sshd@8-10.200.20.21:22-10.200.16.10:33516.service - OpenSSH per-connection server daemon (10.200.16.10:33516). Jul 15 04:42:21.566219 sshd[6463]: Accepted publickey for core from 10.200.16.10 port 33516 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:21.568797 sshd-session[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:21.574004 systemd-logind[1868]: New session 11 of user core. Jul 15 04:42:21.580849 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 04:42:21.954091 sshd[6467]: Connection closed by 10.200.16.10 port 33516 Jul 15 04:42:21.954851 sshd-session[6463]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:21.958451 systemd[1]: sshd@8-10.200.20.21:22-10.200.16.10:33516.service: Deactivated successfully. Jul 15 04:42:21.960147 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 04:42:21.962203 systemd-logind[1868]: Session 11 logged out. Waiting for processes to exit. Jul 15 04:42:21.963470 systemd-logind[1868]: Removed session 11. Jul 15 04:42:27.041163 systemd[1]: Started sshd@9-10.200.20.21:22-10.200.16.10:33530.service - OpenSSH per-connection server daemon (10.200.16.10:33530). 
Jul 15 04:42:27.519856 sshd[6486]: Accepted publickey for core from 10.200.16.10 port 33530 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:27.521105 sshd-session[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:27.524973 systemd-logind[1868]: New session 12 of user core. Jul 15 04:42:27.530865 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 04:42:27.906845 sshd[6489]: Connection closed by 10.200.16.10 port 33530 Jul 15 04:42:27.906662 sshd-session[6486]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:27.910132 systemd[1]: sshd@9-10.200.20.21:22-10.200.16.10:33530.service: Deactivated successfully. Jul 15 04:42:27.912008 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 04:42:27.913688 systemd-logind[1868]: Session 12 logged out. Waiting for processes to exit. Jul 15 04:42:27.915227 systemd-logind[1868]: Removed session 12. Jul 15 04:42:27.995252 systemd[1]: Started sshd@10-10.200.20.21:22-10.200.16.10:33536.service - OpenSSH per-connection server daemon (10.200.16.10:33536). Jul 15 04:42:28.489614 sshd[6502]: Accepted publickey for core from 10.200.16.10 port 33536 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU Jul 15 04:42:28.491090 sshd-session[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:28.494269 containerd[1889]: time="2025-07-15T04:42:28.494035810Z" level=info msg="StopPodSandbox for \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\"" Jul 15 04:42:28.498727 systemd-logind[1868]: New session 13 of user core. Jul 15 04:42:28.501835 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.520 [WARNING][6515] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.520 [INFO][6515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.520 [INFO][6515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" iface="eth0" netns="" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.520 [INFO][6515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.520 [INFO][6515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.535 [INFO][6523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.535 [INFO][6523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.535 [INFO][6523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.541 [WARNING][6523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.541 [INFO][6523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.542 [INFO][6523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:28.549431 containerd[1889]: 2025-07-15 04:42:28.545 [INFO][6515] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.549758 containerd[1889]: time="2025-07-15T04:42:28.549575788Z" level=info msg="TearDown network for sandbox \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" successfully" Jul 15 04:42:28.549758 containerd[1889]: time="2025-07-15T04:42:28.549613909Z" level=info msg="StopPodSandbox for \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" returns successfully" Jul 15 04:42:28.552293 containerd[1889]: time="2025-07-15T04:42:28.552262394Z" level=info msg="RemovePodSandbox for \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\"" Jul 15 04:42:28.552351 containerd[1889]: time="2025-07-15T04:42:28.552305644Z" level=info msg="Forcibly stopping sandbox \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\"" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.585 [WARNING][6537] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.585 [INFO][6537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.585 [INFO][6537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" iface="eth0" netns="" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.585 [INFO][6537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.585 [INFO][6537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.598 [INFO][6544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.598 [INFO][6544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.598 [INFO][6544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.603 [WARNING][6544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.603 [INFO][6544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" HandleID="k8s-pod-network.36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--95btq-eth0" Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.604 [INFO][6544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:28.606936 containerd[1889]: 2025-07-15 04:42:28.605 [INFO][6537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b" Jul 15 04:42:28.607237 containerd[1889]: time="2025-07-15T04:42:28.606985892Z" level=info msg="TearDown network for sandbox \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" successfully" Jul 15 04:42:28.608348 containerd[1889]: time="2025-07-15T04:42:28.608324435Z" level=info msg="Ensure that sandbox 36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b in task-service has been cleanup successfully" Jul 15 04:42:28.622278 containerd[1889]: time="2025-07-15T04:42:28.622236199Z" level=info msg="RemovePodSandbox \"36ea031ffdcdac12e26dca92b666f168528088e40cfd5e7b04ff96ea43eeae1b\" returns successfully" Jul 15 04:42:28.623045 containerd[1889]: time="2025-07-15T04:42:28.622795311Z" level=info msg="StopPodSandbox for \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\"" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.646 [WARNING][6558] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with 
the clean up ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.646 [INFO][6558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.646 [INFO][6558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" iface="eth0" netns="" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.646 [INFO][6558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.646 [INFO][6558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.659 [INFO][6565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.659 [INFO][6565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.659 [INFO][6565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.664 [WARNING][6565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.664 [INFO][6565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.665 [INFO][6565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:28.667897 containerd[1889]: 2025-07-15 04:42:28.666 [INFO][6558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.668421 containerd[1889]: time="2025-07-15T04:42:28.668272845Z" level=info msg="TearDown network for sandbox \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" successfully" Jul 15 04:42:28.668421 containerd[1889]: time="2025-07-15T04:42:28.668302126Z" level=info msg="StopPodSandbox for \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" returns successfully" Jul 15 04:42:28.668992 containerd[1889]: time="2025-07-15T04:42:28.668974009Z" level=info msg="RemovePodSandbox for \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\"" Jul 15 04:42:28.669287 containerd[1889]: time="2025-07-15T04:42:28.669082900Z" level=info msg="Forcibly stopping sandbox \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\"" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.692 [WARNING][6579] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" WorkloadEndpoint="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.692 [INFO][6579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.692 [INFO][6579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" iface="eth0" netns="" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.693 [INFO][6579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.693 [INFO][6579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.706 [INFO][6587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.707 [INFO][6587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.707 [INFO][6587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.711 [WARNING][6587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.711 [INFO][6587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" HandleID="k8s-pod-network.83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Workload="ci--4396.0.0--n--9104e8bf1a-k8s-calico--apiserver--65866b6cfd--dcrr6-eth0" Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.713 [INFO][6587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:28.715559 containerd[1889]: 2025-07-15 04:42:28.714 [INFO][6579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173" Jul 15 04:42:28.715859 containerd[1889]: time="2025-07-15T04:42:28.715614065Z" level=info msg="TearDown network for sandbox \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" successfully" Jul 15 04:42:28.716993 containerd[1889]: time="2025-07-15T04:42:28.716970272Z" level=info msg="Ensure that sandbox 83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173 in task-service has been cleanup successfully" Jul 15 04:42:28.735028 containerd[1889]: time="2025-07-15T04:42:28.734990139Z" level=info msg="RemovePodSandbox \"83ede7a8a80794635e310e519fdbf94e3aa78400b9fa49e3dcd708bc461cd173\" returns successfully" Jul 15 04:42:28.923766 sshd[6520]: Connection closed by 10.200.16.10 port 33536 Jul 15 04:42:28.924491 sshd-session[6502]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:28.928060 systemd-logind[1868]: Session 13 logged out. Waiting for processes to exit. 
Jul 15 04:42:28.928701 systemd[1]: sshd@10-10.200.20.21:22-10.200.16.10:33536.service: Deactivated successfully.
Jul 15 04:42:28.932549 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 04:42:28.936236 systemd-logind[1868]: Removed session 13.
Jul 15 04:42:29.008136 systemd[1]: Started sshd@11-10.200.20.21:22-10.200.16.10:33542.service - OpenSSH per-connection server daemon (10.200.16.10:33542).
Jul 15 04:42:29.493725 sshd[6602]: Accepted publickey for core from 10.200.16.10 port 33542 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:29.494569 sshd-session[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:29.498522 systemd-logind[1868]: New session 14 of user core.
Jul 15 04:42:29.505869 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 04:42:29.883224 sshd[6605]: Connection closed by 10.200.16.10 port 33542
Jul 15 04:42:29.883951 sshd-session[6602]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:29.886979 systemd[1]: sshd@11-10.200.20.21:22-10.200.16.10:33542.service: Deactivated successfully.
Jul 15 04:42:29.889088 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 04:42:29.890183 systemd-logind[1868]: Session 14 logged out. Waiting for processes to exit.
Jul 15 04:42:29.891817 systemd-logind[1868]: Removed session 14.
Jul 15 04:42:31.005091 containerd[1889]: time="2025-07-15T04:42:31.005051557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"875678958b2e23fe07ed0e2f0eaa88d1d9cfb12bf4d2428da7d62becf560bfec\" pid:6627 exited_at:{seconds:1752554551 nanos:4136035}"
Jul 15 04:42:31.206963 containerd[1889]: time="2025-07-15T04:42:31.206922681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" id:\"78be17955329f8519dfb29492407b347251beb91b278343a83b0b24830e330ec\" pid:6651 exited_at:{seconds:1752554551 nanos:206482588}"
Jul 15 04:42:34.967056 systemd[1]: Started sshd@12-10.200.20.21:22-10.200.16.10:49606.service - OpenSSH per-connection server daemon (10.200.16.10:49606).
Jul 15 04:42:35.422055 sshd[6667]: Accepted publickey for core from 10.200.16.10 port 49606 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:35.423196 sshd-session[6667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:35.426992 systemd-logind[1868]: New session 15 of user core.
Jul 15 04:42:35.439870 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 04:42:35.798473 sshd[6670]: Connection closed by 10.200.16.10 port 49606
Jul 15 04:42:35.798298 sshd-session[6667]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:35.801638 systemd[1]: sshd@12-10.200.20.21:22-10.200.16.10:49606.service: Deactivated successfully.
Jul 15 04:42:35.803540 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 04:42:35.804460 systemd-logind[1868]: Session 15 logged out. Waiting for processes to exit.
Jul 15 04:42:35.806102 systemd-logind[1868]: Removed session 15.
Jul 15 04:42:37.817881 containerd[1889]: time="2025-07-15T04:42:37.817844936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" id:\"778a01dd4aead04a266410a8cedf7d120f9bf05cdbed20cb5d4cb496e9410de8\" pid:6694 exited_at:{seconds:1752554557 nanos:817611561}"
Jul 15 04:42:40.885938 systemd[1]: Started sshd@13-10.200.20.21:22-10.200.16.10:56394.service - OpenSSH per-connection server daemon (10.200.16.10:56394).
Jul 15 04:42:41.365338 sshd[6725]: Accepted publickey for core from 10.200.16.10 port 56394 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:41.366436 sshd-session[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:41.371125 systemd-logind[1868]: New session 16 of user core.
Jul 15 04:42:41.377883 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 04:42:41.753229 sshd[6728]: Connection closed by 10.200.16.10 port 56394
Jul 15 04:42:41.753834 sshd-session[6725]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:41.757687 systemd[1]: sshd@13-10.200.20.21:22-10.200.16.10:56394.service: Deactivated successfully.
Jul 15 04:42:41.760331 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 04:42:41.762341 systemd-logind[1868]: Session 16 logged out. Waiting for processes to exit.
Jul 15 04:42:41.763412 systemd-logind[1868]: Removed session 16.
Jul 15 04:42:43.365562 containerd[1889]: time="2025-07-15T04:42:43.365510390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\" id:\"fd1081caa4ef157d1f31f826a3909c29be460b298e243fb7d5023bd0623014a2\" pid:6750 exited_at:{seconds:1752554563 nanos:365086361}"
Jul 15 04:42:46.845213 systemd[1]: Started sshd@14-10.200.20.21:22-10.200.16.10:56410.service - OpenSSH per-connection server daemon (10.200.16.10:56410).
Jul 15 04:42:47.329179 sshd[6762]: Accepted publickey for core from 10.200.16.10 port 56410 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:47.330900 sshd-session[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:47.336124 systemd-logind[1868]: New session 17 of user core.
Jul 15 04:42:47.339266 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 04:42:47.737444 sshd[6765]: Connection closed by 10.200.16.10 port 56410
Jul 15 04:42:47.739022 sshd-session[6762]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:47.743335 systemd[1]: sshd@14-10.200.20.21:22-10.200.16.10:56410.service: Deactivated successfully.
Jul 15 04:42:47.746623 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 04:42:47.749444 systemd-logind[1868]: Session 17 logged out. Waiting for processes to exit.
Jul 15 04:42:47.750464 systemd-logind[1868]: Removed session 17.
Jul 15 04:42:47.826843 systemd[1]: Started sshd@15-10.200.20.21:22-10.200.16.10:56426.service - OpenSSH per-connection server daemon (10.200.16.10:56426).
Jul 15 04:42:48.323127 sshd[6777]: Accepted publickey for core from 10.200.16.10 port 56426 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:48.324316 sshd-session[6777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:48.331766 systemd-logind[1868]: New session 18 of user core.
Jul 15 04:42:48.336042 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 04:42:48.842173 sshd[6780]: Connection closed by 10.200.16.10 port 56426
Jul 15 04:42:48.843323 sshd-session[6777]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:48.846257 systemd-logind[1868]: Session 18 logged out. Waiting for processes to exit.
Jul 15 04:42:48.847415 systemd[1]: sshd@15-10.200.20.21:22-10.200.16.10:56426.service: Deactivated successfully.
Jul 15 04:42:48.850750 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 04:42:48.852652 systemd-logind[1868]: Removed session 18.
Jul 15 04:42:48.931544 systemd[1]: Started sshd@16-10.200.20.21:22-10.200.16.10:56434.service - OpenSSH per-connection server daemon (10.200.16.10:56434).
Jul 15 04:42:49.416401 sshd[6790]: Accepted publickey for core from 10.200.16.10 port 56434 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:49.418014 sshd-session[6790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:49.423406 systemd-logind[1868]: New session 19 of user core.
Jul 15 04:42:49.428833 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 04:42:51.130362 sshd[6793]: Connection closed by 10.200.16.10 port 56434
Jul 15 04:42:51.130938 sshd-session[6790]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:51.134392 systemd[1]: sshd@16-10.200.20.21:22-10.200.16.10:56434.service: Deactivated successfully.
Jul 15 04:42:51.136135 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 04:42:51.136335 systemd[1]: session-19.scope: Consumed 344ms CPU time, 73.3M memory peak.
Jul 15 04:42:51.137782 systemd-logind[1868]: Session 19 logged out. Waiting for processes to exit.
Jul 15 04:42:51.139121 systemd-logind[1868]: Removed session 19.
Jul 15 04:42:51.212736 systemd[1]: Started sshd@17-10.200.20.21:22-10.200.16.10:38474.service - OpenSSH per-connection server daemon (10.200.16.10:38474).
Jul 15 04:42:51.672368 sshd[6810]: Accepted publickey for core from 10.200.16.10 port 38474 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:51.673954 sshd-session[6810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:51.679789 systemd-logind[1868]: New session 20 of user core.
Jul 15 04:42:51.686900 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 04:42:52.133767 sshd[6813]: Connection closed by 10.200.16.10 port 38474
Jul 15 04:42:52.134369 sshd-session[6810]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:52.137790 systemd[1]: sshd@17-10.200.20.21:22-10.200.16.10:38474.service: Deactivated successfully.
Jul 15 04:42:52.139397 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 04:42:52.140187 systemd-logind[1868]: Session 20 logged out. Waiting for processes to exit.
Jul 15 04:42:52.142242 systemd-logind[1868]: Removed session 20.
Jul 15 04:42:52.226672 systemd[1]: Started sshd@18-10.200.20.21:22-10.200.16.10:38480.service - OpenSSH per-connection server daemon (10.200.16.10:38480).
Jul 15 04:42:52.722645 sshd[6823]: Accepted publickey for core from 10.200.16.10 port 38480 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:52.723809 sshd-session[6823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:52.727470 systemd-logind[1868]: New session 21 of user core.
Jul 15 04:42:52.732971 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 04:42:53.115180 sshd[6826]: Connection closed by 10.200.16.10 port 38480
Jul 15 04:42:53.115930 sshd-session[6823]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:53.120523 systemd[1]: sshd@18-10.200.20.21:22-10.200.16.10:38480.service: Deactivated successfully.
Jul 15 04:42:53.122574 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 04:42:53.124054 systemd-logind[1868]: Session 21 logged out. Waiting for processes to exit.
Jul 15 04:42:53.125354 systemd-logind[1868]: Removed session 21.
Jul 15 04:42:58.202374 systemd[1]: Started sshd@19-10.200.20.21:22-10.200.16.10:38482.service - OpenSSH per-connection server daemon (10.200.16.10:38482).
Jul 15 04:42:58.662097 sshd[6842]: Accepted publickey for core from 10.200.16.10 port 38482 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:42:58.663228 sshd-session[6842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:42:58.667269 systemd-logind[1868]: New session 22 of user core.
Jul 15 04:42:58.670851 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 04:42:59.036017 sshd[6845]: Connection closed by 10.200.16.10 port 38482
Jul 15 04:42:59.035360 sshd-session[6842]: pam_unix(sshd:session): session closed for user core
Jul 15 04:42:59.038454 systemd-logind[1868]: Session 22 logged out. Waiting for processes to exit.
Jul 15 04:42:59.038625 systemd[1]: sshd@19-10.200.20.21:22-10.200.16.10:38482.service: Deactivated successfully.
Jul 15 04:42:59.040539 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 04:42:59.042763 systemd-logind[1868]: Removed session 22.
Jul 15 04:43:01.003888 containerd[1889]: time="2025-07-15T04:43:01.003845557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"34c7ec7e4bcf282f36a23c494ad25ab4b2448f28f7d1cd86b58d7d0f54cb4ecd\" pid:6867 exited_at:{seconds:1752554581 nanos:3381824}"
Jul 15 04:43:01.199223 containerd[1889]: time="2025-07-15T04:43:01.199166431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e00f28fc1e67a02a612acff9db1ec1a3df4620ba302182ed860aaa48f6c8256\" id:\"e49fe957e889f65d278bd67a0b186b068c0f149422f0ad524f7d987830f0c97a\" pid:6889 exited_at:{seconds:1752554581 nanos:198931112}"
Jul 15 04:43:04.122931 systemd[1]: Started sshd@20-10.200.20.21:22-10.200.16.10:50838.service - OpenSSH per-connection server daemon (10.200.16.10:50838).
Jul 15 04:43:04.580917 sshd[6900]: Accepted publickey for core from 10.200.16.10 port 50838 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:43:04.582466 sshd-session[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:04.586890 systemd-logind[1868]: New session 23 of user core.
Jul 15 04:43:04.591838 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 04:43:04.961404 sshd[6903]: Connection closed by 10.200.16.10 port 50838
Jul 15 04:43:04.962007 sshd-session[6900]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:04.965459 systemd-logind[1868]: Session 23 logged out. Waiting for processes to exit.
Jul 15 04:43:04.966189 systemd[1]: sshd@20-10.200.20.21:22-10.200.16.10:50838.service: Deactivated successfully.
Jul 15 04:43:04.969203 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 04:43:04.971747 systemd-logind[1868]: Removed session 23.
Jul 15 04:43:10.054823 systemd[1]: Started sshd@21-10.200.20.21:22-10.200.16.10:50854.service - OpenSSH per-connection server daemon (10.200.16.10:50854).
Jul 15 04:43:10.553803 sshd[6920]: Accepted publickey for core from 10.200.16.10 port 50854 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:43:10.555780 sshd-session[6920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:10.559983 systemd-logind[1868]: New session 24 of user core.
Jul 15 04:43:10.564880 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 04:43:10.984494 sshd[6923]: Connection closed by 10.200.16.10 port 50854
Jul 15 04:43:10.986266 sshd-session[6920]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:10.989471 systemd-logind[1868]: Session 24 logged out. Waiting for processes to exit.
Jul 15 04:43:10.990252 systemd[1]: sshd@21-10.200.20.21:22-10.200.16.10:50854.service: Deactivated successfully.
Jul 15 04:43:10.992354 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 04:43:10.993778 systemd-logind[1868]: Removed session 24.
Jul 15 04:43:11.139840 containerd[1889]: time="2025-07-15T04:43:11.139798424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf28934529e444fcdce0afdf24f2e16f63a1d190ae4d565d264f23d079a413e\" id:\"ca40b4294f85bfa5c82099ad16d6e8fe6b186c1e3207b0bcd1bfbd6de8593120\" pid:6946 exited_at:{seconds:1752554591 nanos:139469758}"
Jul 15 04:43:13.363781 containerd[1889]: time="2025-07-15T04:43:13.363732256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c0d25990b4de3a820bc3720ff72d4f48f3cca443e9425deb22155493046d0ee\" id:\"0a9d1e7ee4ce1ad3c48c700728301f4ab61918aeb2bc1f66e58a934541a28835\" pid:6969 exited_at:{seconds:1752554593 nanos:363452840}"
Jul 15 04:43:16.067923 systemd[1]: Started sshd@22-10.200.20.21:22-10.200.16.10:41792.service - OpenSSH per-connection server daemon (10.200.16.10:41792).
Jul 15 04:43:16.523278 sshd[6984]: Accepted publickey for core from 10.200.16.10 port 41792 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:43:16.524431 sshd-session[6984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:16.528509 systemd-logind[1868]: New session 25 of user core.
Jul 15 04:43:16.532855 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 04:43:16.895752 sshd[6987]: Connection closed by 10.200.16.10 port 41792
Jul 15 04:43:16.896360 sshd-session[6984]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:16.899704 systemd[1]: sshd@22-10.200.20.21:22-10.200.16.10:41792.service: Deactivated successfully.
Jul 15 04:43:16.901587 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 04:43:16.902505 systemd-logind[1868]: Session 25 logged out. Waiting for processes to exit.
Jul 15 04:43:16.904656 systemd-logind[1868]: Removed session 25.
Jul 15 04:43:21.984080 systemd[1]: Started sshd@23-10.200.20.21:22-10.200.16.10:46764.service - OpenSSH per-connection server daemon (10.200.16.10:46764).
Jul 15 04:43:22.477610 sshd[6999]: Accepted publickey for core from 10.200.16.10 port 46764 ssh2: RSA SHA256:jfinOsXBNnbz+C2MuGDJNFkfZ1KTGpoxomAqCFp2paU
Jul 15 04:43:22.478747 sshd-session[6999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:22.482364 systemd-logind[1868]: New session 26 of user core.
Jul 15 04:43:22.489844 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 04:43:22.883844 sshd[7002]: Connection closed by 10.200.16.10 port 46764
Jul 15 04:43:22.884392 sshd-session[6999]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:22.887816 systemd[1]: sshd@23-10.200.20.21:22-10.200.16.10:46764.service: Deactivated successfully.
Jul 15 04:43:22.889448 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 04:43:22.890437 systemd-logind[1868]: Session 26 logged out. Waiting for processes to exit.
Jul 15 04:43:22.891417 systemd-logind[1868]: Removed session 26.