Sep 9 23:41:44.107673 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Sep 9 23:41:44.107692 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:41:44.107698 kernel: KASLR enabled
Sep 9 23:41:44.107702 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 9 23:41:44.107707 kernel: printk: legacy bootconsole [pl11] enabled
Sep 9 23:41:44.107711 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:41:44.107716 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Sep 9 23:41:44.107720 kernel: random: crng init done
Sep 9 23:41:44.107724 kernel: secureboot: Secure boot disabled
Sep 9 23:41:44.107728 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:41:44.107732 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 9 23:41:44.107736 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107740 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107745 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 9 23:41:44.107750 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107754 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107759 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107763 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107768 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107772 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107776 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 9 23:41:44.107780 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 23:41:44.107784 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 9 23:41:44.107788 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:41:44.107793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 9 23:41:44.107797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Sep 9 23:41:44.107801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Sep 9 23:41:44.107805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 9 23:41:44.107809 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 9 23:41:44.107814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 9 23:41:44.107818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 9 23:41:44.107822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 9 23:41:44.107826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 9 23:41:44.107831 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 9 23:41:44.107835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 9 23:41:44.107839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 9 23:41:44.107843 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Sep 9 23:41:44.107847 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Sep 9 23:41:44.107851 kernel: Zone ranges:
Sep 9 23:41:44.107855 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 9 23:41:44.107862 kernel: DMA32 empty
Sep 9 23:41:44.107866 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 9 23:41:44.107871 kernel: Device empty
Sep 9 23:41:44.107875 kernel: Movable zone start for each node
Sep 9 23:41:44.107879 kernel: Early memory node ranges
Sep 9 23:41:44.107885 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 9 23:41:44.107889 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Sep 9 23:41:44.107893 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Sep 9 23:41:44.107898 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Sep 9 23:41:44.107902 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 9 23:41:44.107906 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 9 23:41:44.107911 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 9 23:41:44.107915 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 9 23:41:44.107919 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 9 23:41:44.107924 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 9 23:41:44.107928 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 9 23:41:44.107932 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Sep 9 23:41:44.107938 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:41:44.107942 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:41:44.107946 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:41:44.107951 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 9 23:41:44.107955 kernel: psci: SMC Calling Convention v1.4
Sep 9 23:41:44.107959 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 9 23:41:44.107964 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 9 23:41:44.107968 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:41:44.107972 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:41:44.107977 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 23:41:44.107981 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:41:44.107986 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Sep 9 23:41:44.107991 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:41:44.108029 kernel: CPU features: detected: Spectre-v4
Sep 9 23:41:44.108034 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:41:44.108038 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:41:44.108042 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:41:44.108047 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Sep 9 23:41:44.108051 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:41:44.108055 kernel: alternatives: applying boot alternatives
Sep 9 23:41:44.108061 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:41:44.108065 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:41:44.108071 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:41:44.108076 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:41:44.108080 kernel: Fallback order for Node 0: 0
Sep 9 23:41:44.108084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Sep 9 23:41:44.108089 kernel: Policy zone: Normal
Sep 9 23:41:44.108093 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:41:44.108097 kernel: software IO TLB: area num 2.
Sep 9 23:41:44.108102 kernel: software IO TLB: mapped [mem 0x0000000036290000-0x000000003a290000] (64MB)
Sep 9 23:41:44.108106 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 23:41:44.108110 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:41:44.108115 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:41:44.108121 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 23:41:44.108125 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:41:44.108130 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:41:44.108134 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:41:44.108138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 23:41:44.108143 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:41:44.108147 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:41:44.108152 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:41:44.108156 kernel: GICv3: 960 SPIs implemented
Sep 9 23:41:44.108160 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:41:44.108165 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:41:44.108169 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Sep 9 23:41:44.108174 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Sep 9 23:41:44.108178 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 9 23:41:44.108183 kernel: ITS: No ITS available, not enabling LPIs
Sep 9 23:41:44.108187 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:41:44.108192 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Sep 9 23:41:44.108196 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 23:41:44.108201 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Sep 9 23:41:44.108205 kernel: Console: colour dummy device 80x25
Sep 9 23:41:44.108210 kernel: printk: legacy console [tty1] enabled
Sep 9 23:41:44.108214 kernel: ACPI: Core revision 20240827
Sep 9 23:41:44.108219 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Sep 9 23:41:44.108224 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:41:44.108229 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:41:44.108233 kernel: landlock: Up and running.
Sep 9 23:41:44.108238 kernel: SELinux: Initializing.
Sep 9 23:41:44.108242 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:41:44.108250 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:41:44.108256 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Sep 9 23:41:44.108261 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Sep 9 23:41:44.108265 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 9 23:41:44.108270 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:41:44.108275 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:41:44.108281 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:41:44.108286 kernel: Remapping and enabling EFI services.
Sep 9 23:41:44.108291 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:41:44.108295 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:41:44.108300 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 9 23:41:44.108305 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Sep 9 23:41:44.108310 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 23:41:44.108315 kernel: SMP: Total of 2 processors activated.
Sep 9 23:41:44.108320 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:41:44.108324 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:41:44.108329 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 9 23:41:44.108334 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:41:44.108339 kernel: CPU features: detected: Common not Private translations
Sep 9 23:41:44.108344 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:41:44.108349 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Sep 9 23:41:44.108355 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:41:44.108359 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:41:44.108364 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:41:44.108369 kernel: CPU features: detected: Speculation barrier (SB)
Sep 9 23:41:44.108374 kernel: CPU features: detected: TLB range maintenance instructions
Sep 9 23:41:44.108378 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:41:44.108383 kernel: CPU features: detected: Scalable Vector Extension
Sep 9 23:41:44.108388 kernel: alternatives: applying system-wide alternatives
Sep 9 23:41:44.108393 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Sep 9 23:41:44.108398 kernel: SVE: maximum available vector length 16 bytes per vector
Sep 9 23:41:44.108403 kernel: SVE: default vector length 16 bytes per vector
Sep 9 23:41:44.108408 kernel: Memory: 3959668K/4194160K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 213304K reserved, 16384K cma-reserved)
Sep 9 23:41:44.108413 kernel: devtmpfs: initialized
Sep 9 23:41:44.108417 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:41:44.108422 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 23:41:44.108427 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:41:44.108432 kernel: 0 pages in range for non-PLT usage
Sep 9 23:41:44.108437 kernel: 508576 pages in range for PLT usage
Sep 9 23:41:44.108442 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:41:44.108447 kernel: SMBIOS 3.1.0 present.
Sep 9 23:41:44.108451 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 9 23:41:44.108456 kernel: DMI: Memory slots populated: 2/2
Sep 9 23:41:44.108461 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:41:44.108466 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:41:44.108471 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:41:44.108475 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:41:44.108481 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:41:44.108486 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Sep 9 23:41:44.108490 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:41:44.108495 kernel: cpuidle: using governor menu
Sep 9 23:41:44.108500 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:41:44.108505 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:41:44.108509 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:41:44.108514 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:41:44.108519 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:41:44.108525 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:41:44.108530 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:41:44.108534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:41:44.108539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:41:44.108544 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:41:44.108549 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:41:44.108553 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:41:44.108558 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:41:44.108563 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:41:44.108569 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:41:44.108573 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:41:44.108578 kernel: ACPI: Interpreter enabled
Sep 9 23:41:44.108583 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:41:44.108588 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:41:44.108593 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 23:41:44.108597 kernel: printk: legacy bootconsole [pl11] disabled
Sep 9 23:41:44.108602 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 9 23:41:44.108607 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:41:44.108613 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:41:44.108617 kernel: iommu: Default domain type: Translated
Sep 9 23:41:44.108622 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:41:44.108627 kernel: efivars: Registered efivars operations
Sep 9 23:41:44.108632 kernel: vgaarb: loaded
Sep 9 23:41:44.108636 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:41:44.108641 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:41:44.108646 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:41:44.108651 kernel: pnp: PnP ACPI init
Sep 9 23:41:44.108656 kernel: pnp: PnP ACPI: found 0 devices
Sep 9 23:41:44.108661 kernel: NET: Registered PF_INET protocol family
Sep 9 23:41:44.108666 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:41:44.108671 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:41:44.108676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:41:44.108680 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:41:44.108685 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:41:44.108690 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:41:44.108695 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:41:44.108700 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:41:44.108705 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:41:44.108710 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:41:44.108715 kernel: kvm [1]: HYP mode not available
Sep 9 23:41:44.108719 kernel: Initialise system trusted keyrings
Sep 9 23:41:44.108724 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:41:44.108729 kernel: Key type asymmetric registered
Sep 9 23:41:44.108733 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:41:44.108738 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:41:44.108744 kernel: io scheduler mq-deadline registered
Sep 9 23:41:44.108749 kernel: io scheduler kyber registered
Sep 9 23:41:44.108753 kernel: io scheduler bfq registered
Sep 9 23:41:44.108758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:41:44.111030 kernel: thunder_xcv, ver 1.0
Sep 9 23:41:44.111045 kernel: thunder_bgx, ver 1.0
Sep 9 23:41:44.111051 kernel: nicpf, ver 1.0
Sep 9 23:41:44.111056 kernel: nicvf, ver 1.0
Sep 9 23:41:44.111197 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:41:44.111253 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:41:43 UTC (1757461303)
Sep 9 23:41:44.111260 kernel: efifb: probing for efifb
Sep 9 23:41:44.111265 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 9 23:41:44.111270 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 9 23:41:44.111275 kernel: efifb: scrolling: redraw
Sep 9 23:41:44.111280 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 23:41:44.111285 kernel: Console: switching to colour frame buffer device 128x48
Sep 9 23:41:44.111289 kernel: fb0: EFI VGA frame buffer device
Sep 9 23:41:44.111296 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 9 23:41:44.111301 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:41:44.111306 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 23:41:44.111311 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:41:44.111316 kernel: watchdog: NMI not fully supported
Sep 9 23:41:44.111320 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:41:44.111325 kernel: Segment Routing with IPv6
Sep 9 23:41:44.111330 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:41:44.111335 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:41:44.111341 kernel: Key type dns_resolver registered
Sep 9 23:41:44.111346 kernel: registered taskstats version 1
Sep 9 23:41:44.111350 kernel: Loading compiled-in X.509 certificates
Sep 9 23:41:44.111356 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:41:44.111360 kernel: Demotion targets for Node 0: null
Sep 9 23:41:44.111365 kernel: Key type .fscrypt registered
Sep 9 23:41:44.111370 kernel: Key type fscrypt-provisioning registered
Sep 9 23:41:44.111375 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:41:44.111380 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:41:44.111385 kernel: ima: No architecture policies found
Sep 9 23:41:44.111390 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:41:44.111395 kernel: clk: Disabling unused clocks
Sep 9 23:41:44.111400 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:41:44.111405 kernel: Warning: unable to open an initial console.
Sep 9 23:41:44.111410 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:41:44.111415 kernel: Run /init as init process
Sep 9 23:41:44.111419 kernel: with arguments:
Sep 9 23:41:44.111424 kernel: /init
Sep 9 23:41:44.111430 kernel: with environment:
Sep 9 23:41:44.111435 kernel: HOME=/
Sep 9 23:41:44.111439 kernel: TERM=linux
Sep 9 23:41:44.111444 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:41:44.111450 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:41:44.111457 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:41:44.111463 systemd[1]: Detected virtualization microsoft.
Sep 9 23:41:44.111468 systemd[1]: Detected architecture arm64.
Sep 9 23:41:44.111474 systemd[1]: Running in initrd.
Sep 9 23:41:44.111479 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:41:44.111484 systemd[1]: Hostname set to .
Sep 9 23:41:44.111489 systemd[1]: Initializing machine ID from random generator.
Sep 9 23:41:44.111495 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:41:44.111500 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:41:44.111505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:41:44.111511 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:41:44.111518 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:41:44.111523 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:41:44.111529 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:41:44.111535 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:41:44.111540 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:41:44.111545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:41:44.111551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:41:44.111557 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:41:44.111562 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:41:44.111567 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:41:44.111572 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:41:44.111577 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:41:44.111583 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:41:44.111588 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:41:44.111593 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:41:44.111599 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:41:44.111605 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:41:44.111610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:41:44.111615 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:41:44.111620 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:41:44.111626 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:41:44.111631 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:41:44.111636 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:41:44.111643 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:41:44.111648 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:41:44.111653 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:41:44.111658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:44.111679 systemd-journald[224]: Collecting audit messages is disabled.
Sep 9 23:41:44.111693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:41:44.111700 systemd-journald[224]: Journal started
Sep 9 23:41:44.111715 systemd-journald[224]: Runtime Journal (/run/log/journal/757325fbcc2243149893249708709360) is 8M, max 78.5M, 70.5M free.
Sep 9 23:41:44.115367 systemd-modules-load[226]: Inserted module 'overlay'
Sep 9 23:41:44.137021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:41:44.137071 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:41:44.144070 kernel: Bridge firewalling registered
Sep 9 23:41:44.144170 systemd-modules-load[226]: Inserted module 'br_netfilter'
Sep 9 23:41:44.148411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:41:44.159022 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:41:44.164506 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:41:44.171810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:44.183829 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:41:44.192207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:41:44.214116 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:41:44.232423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:41:44.252147 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:41:44.259166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:41:44.275389 systemd-tmpfiles[254]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:41:44.280049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:41:44.288245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:41:44.300373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:41:44.317147 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:41:44.332066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:41:44.338860 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:41:44.374520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:41:44.399049 systemd-resolved[262]: Positive Trust Anchors:
Sep 9 23:41:44.399068 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:41:44.399089 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:41:44.401565 systemd-resolved[262]: Defaulting to hostname 'linux'.
Sep 9 23:41:44.402360 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:41:44.407281 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:41:44.490022 kernel: SCSI subsystem initialized
Sep 9 23:41:44.497013 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:41:44.504039 kernel: iscsi: registered transport (tcp)
Sep 9 23:41:44.516956 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:41:44.517031 kernel: QLogic iSCSI HBA Driver
Sep 9 23:41:44.535457 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:41:44.555485 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:41:44.562564 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:41:44.619149 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:41:44.626142 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:41:44.696019 kernel: raid6: neonx8 gen() 18541 MB/s
Sep 9 23:41:44.713127 kernel: raid6: neonx4 gen() 18550 MB/s
Sep 9 23:41:44.732027 kernel: raid6: neonx2 gen() 17074 MB/s
Sep 9 23:41:44.752137 kernel: raid6: neonx1 gen() 15158 MB/s
Sep 9 23:41:44.771004 kernel: raid6: int64x8 gen() 10775 MB/s
Sep 9 23:41:44.790003 kernel: raid6: int64x4 gen() 10680 MB/s
Sep 9 23:41:44.810135 kernel: raid6: int64x2 gen() 9019 MB/s
Sep 9 23:41:44.831551 kernel: raid6: int64x1 gen() 7100 MB/s
Sep 9 23:41:44.831640 kernel: raid6: using algorithm neonx4 gen() 18550 MB/s
Sep 9 23:41:44.853119 kernel: raid6: .... xor() 15149 MB/s, rmw enabled
Sep 9 23:41:44.853195 kernel: raid6: using neon recovery algorithm
Sep 9 23:41:44.859003 kernel: xor: measuring software checksum speed
Sep 9 23:41:44.863606 kernel: 8regs : 27500 MB/sec
Sep 9 23:41:44.863612 kernel: 32regs : 29129 MB/sec
Sep 9 23:41:44.866288 kernel: arm64_neon : 39002 MB/sec
Sep 9 23:41:44.870025 kernel: xor: using function: arm64_neon (39002 MB/sec)
Sep 9 23:41:44.911079 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:41:44.917945 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:41:44.927150 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:41:44.949212 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 9 23:41:44.952949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:41:44.964740 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:41:44.994733 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Sep 9 23:41:45.019446 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:41:45.025165 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:41:45.072419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:41:45.085271 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:41:45.146023 kernel: hv_vmbus: Vmbus version:5.3
Sep 9 23:41:45.159019 kernel: hv_vmbus: registering driver hid_hyperv
Sep 9 23:41:45.159083 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 23:41:45.159298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:41:45.193300 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 9 23:41:45.193328 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 9 23:41:45.193475 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 9 23:41:45.193483 kernel: hv_vmbus: registering driver hv_netvsc
Sep 9 23:41:45.193490 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 23:41:45.164211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:45.206567 kernel: PTP clock support registered
Sep 9 23:41:45.201746 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:45.214669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:45.228108 kernel: hv_vmbus: registering driver hv_storvsc
Sep 9 23:41:45.228126 kernel: hv_utils: Registering HyperV Utility Driver
Sep 9 23:41:45.228133 kernel: hv_vmbus: registering driver hv_utils
Sep 9 23:41:45.234598 kernel: hv_utils: Heartbeat IC version 3.0
Sep 9 23:41:45.235258 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:41:45.127239 kernel: hv_utils: Shutdown IC version 3.2
Sep 9 23:41:45.135667 kernel: scsi host1: storvsc_host_t
Sep 9 23:41:45.135784 kernel: hv_utils: TimeSync IC version 4.0
Sep 9 23:41:45.136738 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 9 23:41:45.136750 systemd-journald[224]: Time jumped backwards, rotating.
Sep 9 23:41:45.136782 kernel: scsi host0: storvsc_host_t
Sep 9 23:41:45.246141 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:41:45.154721 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 9 23:41:45.154774 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 9 23:41:45.154831 kernel: hv_netvsc 00224878-f7f0-0022-4878-f7f000224878 eth0: VF slot 1 added
Sep 9 23:41:45.246220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:45.176905 kernel: hv_vmbus: registering driver hv_pci
Sep 9 23:41:45.118541 systemd-resolved[262]: Clock change detected. Flushing caches.
Sep 9 23:41:45.191284 kernel: hv_pci 5b90a23c-cd95-46ac-a778-8ca263a0fea2: PCI VMBus probing: Using version 0x10004
Sep 9 23:41:45.191435 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 9 23:41:45.144875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:41:45.207992 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 9 23:41:45.208151 kernel: hv_pci 5b90a23c-cd95-46ac-a778-8ca263a0fea2: PCI host bridge to bus cd95:00
Sep 9 23:41:45.208227 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 9 23:41:45.214068 kernel: pci_bus cd95:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 9 23:41:45.214222 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 9 23:41:45.219148 kernel: pci_bus cd95:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 9 23:41:45.219301 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 9 23:41:45.230806 kernel: pci cd95:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Sep 9 23:41:45.230881 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#257 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 23:41:45.236562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:45.253946 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 23:41:45.254121 kernel: pci cd95:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 9 23:41:45.254147 kernel: pci cd95:00:02.0: enabling Extended Tags
Sep 9 23:41:45.270864 kernel: pci cd95:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cd95:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Sep 9 23:41:45.280316 kernel: pci_bus cd95:00: busn_res: [bus 00-ff] end is updated to 00
Sep 9 23:41:45.280475 kernel: pci cd95:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Sep 9 23:41:45.292193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 23:41:45.292244 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 9 23:41:45.303806 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 9 23:41:45.304035 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 23:41:45.304822 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 9 23:41:45.325827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#226 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 23:41:45.350823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 23:41:45.370682 kernel: mlx5_core cd95:00:02.0: enabling device (0000 -> 0002)
Sep 9 23:41:45.378716 kernel: mlx5_core cd95:00:02.0: PTM is not supported by PCIe
Sep 9 23:41:45.378851 kernel: mlx5_core cd95:00:02.0: firmware version: 16.30.1278
Sep 9 23:41:45.551336 kernel: hv_netvsc 00224878-f7f0-0022-4878-f7f000224878 eth0: VF registering: eth1
Sep 9 23:41:45.551534 kernel: mlx5_core cd95:00:02.0 eth1: joined to eth0
Sep 9 23:41:45.556818 kernel: mlx5_core cd95:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 9 23:41:45.564822 kernel: mlx5_core cd95:00:02.0 enP52629s1: renamed from eth1
Sep 9 23:41:45.903905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 9 23:41:45.927424 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 9 23:41:45.945194 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 9 23:41:45.976056 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 9 23:41:45.981756 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 9 23:41:45.994473 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:41:46.001528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:41:46.009790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:41:46.018776 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:41:46.031969 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:41:46.039935 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:41:46.060670 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:41:46.074787 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#305 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 23:41:46.081822 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 23:41:47.094013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#206 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 23:41:47.106846 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 23:41:47.107848 disk-uuid[660]: The operation has completed successfully.
Sep 9 23:41:47.185549 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:41:47.185653 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:41:47.209759 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:41:47.232169 sh[821]: Success
Sep 9 23:41:47.270612 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:41:47.270678 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:41:47.275872 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:41:47.285821 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:41:47.658210 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:41:47.679490 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:41:47.695747 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:41:47.720168 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (839)
Sep 9 23:41:47.720218 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:41:47.724510 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:48.102972 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:41:48.103055 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:41:48.187119 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:41:48.191201 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:41:48.199489 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:41:48.200300 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:41:48.221535 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:41:48.251837 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (865)
Sep 9 23:41:48.262825 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:48.262888 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:48.315596 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 23:41:48.315663 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 23:41:48.334873 kernel: BTRFS info (device sda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:48.336292 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:41:48.345004 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:41:48.371844 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:41:48.384822 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:41:48.418367 systemd-networkd[1008]: lo: Link UP
Sep 9 23:41:48.418379 systemd-networkd[1008]: lo: Gained carrier
Sep 9 23:41:48.419117 systemd-networkd[1008]: Enumeration completed
Sep 9 23:41:48.420972 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:41:48.425678 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:41:48.425682 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:41:48.426129 systemd[1]: Reached target network.target - Network.
Sep 9 23:41:48.504023 kernel: mlx5_core cd95:00:02.0 enP52629s1: Link up
Sep 9 23:41:48.504309 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 9 23:41:48.539079 kernel: hv_netvsc 00224878-f7f0-0022-4878-f7f000224878 eth0: Data path switched to VF: enP52629s1
Sep 9 23:41:48.538782 systemd-networkd[1008]: enP52629s1: Link UP
Sep 9 23:41:48.538868 systemd-networkd[1008]: eth0: Link UP
Sep 9 23:41:48.538937 systemd-networkd[1008]: eth0: Gained carrier
Sep 9 23:41:48.538953 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:41:48.549154 systemd-networkd[1008]: enP52629s1: Gained carrier
Sep 9 23:41:48.562852 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 9 23:41:49.671684 ignition[989]: Ignition 2.21.0
Sep 9 23:41:49.671697 ignition[989]: Stage: fetch-offline
Sep 9 23:41:49.671776 ignition[989]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:49.678827 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:41:49.671781 ignition[989]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:49.687579 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 23:41:49.674556 ignition[989]: parsed url from cmdline: ""
Sep 9 23:41:49.674560 ignition[989]: no config URL provided
Sep 9 23:41:49.674565 ignition[989]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:41:49.674574 ignition[989]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:41:49.674578 ignition[989]: failed to fetch config: resource requires networking
Sep 9 23:41:49.674728 ignition[989]: Ignition finished successfully
Sep 9 23:41:49.723587 ignition[1020]: Ignition 2.21.0
Sep 9 23:41:49.723607 ignition[1020]: Stage: fetch
Sep 9 23:41:49.723768 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:49.723776 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:49.723906 ignition[1020]: parsed url from cmdline: ""
Sep 9 23:41:49.723908 ignition[1020]: no config URL provided
Sep 9 23:41:49.723912 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:41:49.723917 ignition[1020]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:41:49.723947 ignition[1020]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 9 23:41:49.832766 ignition[1020]: GET result: OK
Sep 9 23:41:49.832846 ignition[1020]: config has been read from IMDS userdata
Sep 9 23:41:49.832868 ignition[1020]: parsing config with SHA512: f5e798dcecf584eddae738a233ad5f23ea5b935e1bf9677d958d37bbf0e8ae166cc097d39132506ded7dd41e7bb4f839c2d90dd54a9e0506fae6a4e0d757db4b
Sep 9 23:41:49.837034 unknown[1020]: fetched base config from "system"
Sep 9 23:41:49.837460 ignition[1020]: fetch: fetch complete
Sep 9 23:41:49.837043 unknown[1020]: fetched base config from "system"
Sep 9 23:41:49.837467 ignition[1020]: fetch: fetch passed
Sep 9 23:41:49.837047 unknown[1020]: fetched user config from "azure"
Sep 9 23:41:49.837514 ignition[1020]: Ignition finished successfully
Sep 9 23:41:49.841416 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 23:41:49.851546 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:41:49.887946 ignition[1026]: Ignition 2.21.0
Sep 9 23:41:49.887963 ignition[1026]: Stage: kargs
Sep 9 23:41:49.888149 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:49.894262 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:41:49.888156 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:49.903716 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:41:49.889353 ignition[1026]: kargs: kargs passed
Sep 9 23:41:49.889404 ignition[1026]: Ignition finished successfully
Sep 9 23:41:49.937031 ignition[1032]: Ignition 2.21.0
Sep 9 23:41:49.939971 ignition[1032]: Stage: disks
Sep 9 23:41:49.940298 ignition[1032]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:49.943138 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:41:49.940310 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:49.949010 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:41:49.941381 ignition[1032]: disks: disks passed
Sep 9 23:41:49.957748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:41:49.941435 ignition[1032]: Ignition finished successfully
Sep 9 23:41:49.966538 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:41:49.974995 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:41:49.983077 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:41:49.990451 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:41:50.093465 systemd-fsck[1040]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 9 23:41:50.102830 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:41:50.108781 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:41:50.351251 systemd-networkd[1008]: eth0: Gained IPv6LL
Sep 9 23:41:52.366866 kernel: EXT4-fs (sda9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:41:52.367936 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:41:52.371750 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:41:52.415424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:41:52.438414 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:41:52.451736 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054)
Sep 9 23:41:52.464449 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:52.464479 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:52.466712 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 9 23:41:52.487154 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 23:41:52.487181 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 23:41:52.477492 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:41:52.477550 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:41:52.495588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:41:52.507141 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:41:52.519062 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:41:53.187044 coreos-metadata[1069]: Sep 09 23:41:53.187 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 9 23:41:53.195325 coreos-metadata[1069]: Sep 09 23:41:53.195 INFO Fetch successful
Sep 9 23:41:53.199395 coreos-metadata[1069]: Sep 09 23:41:53.199 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 9 23:41:53.209195 coreos-metadata[1069]: Sep 09 23:41:53.209 INFO Fetch successful
Sep 9 23:41:53.223695 coreos-metadata[1069]: Sep 09 23:41:53.223 INFO wrote hostname ci-4426.0.0-n-3e4141976f to /sysroot/etc/hostname
Sep 9 23:41:53.231885 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 23:41:53.465198 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:41:53.525380 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:41:53.544316 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:41:53.550172 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:41:54.820660 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:41:54.826566 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:41:54.831316 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:41:54.857361 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:41:54.867808 kernel: BTRFS info (device sda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:54.886495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:41:54.897777 ignition[1173]: INFO : Ignition 2.21.0
Sep 9 23:41:54.897777 ignition[1173]: INFO : Stage: mount
Sep 9 23:41:54.897777 ignition[1173]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:54.897777 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:54.897777 ignition[1173]: INFO : mount: mount passed
Sep 9 23:41:54.897777 ignition[1173]: INFO : Ignition finished successfully
Sep 9 23:41:54.900968 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:41:54.906868 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:41:54.936946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:41:54.964989 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1184)
Sep 9 23:41:54.965041 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:41:54.973929 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:41:54.984744 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 23:41:54.984768 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 23:41:54.986373 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:41:55.013838 ignition[1202]: INFO : Ignition 2.21.0
Sep 9 23:41:55.013838 ignition[1202]: INFO : Stage: files
Sep 9 23:41:55.021116 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:55.021116 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:55.021116 ignition[1202]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:41:55.037439 ignition[1202]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:41:55.037439 ignition[1202]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:41:55.086304 ignition[1202]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:41:55.093051 ignition[1202]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:41:55.093051 ignition[1202]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:41:55.086744 unknown[1202]: wrote ssh authorized keys file for user: core
Sep 9 23:41:55.164048 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 23:41:55.172713 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 9 23:41:55.306236 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:41:55.616951 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 23:41:55.616951 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:41:55.633912 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 23:41:55.795175 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 23:41:55.874622 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:41:55.874622 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:41:55.889005 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:41:55.889005 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:41:55.889005 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:41:55.889005 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:41:55.889005 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:41:55.889005 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:41:55.889005 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:41:55.943754 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:41:55.943754 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:41:55.943754 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:41:55.943754 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:41:55.943754 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:41:55.943754 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 9 23:41:56.386072 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 23:41:56.595917 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:41:56.595917 ignition[1202]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 23:41:56.633132 ignition[1202]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:41:56.649278 ignition[1202]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:41:56.649278 ignition[1202]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 23:41:56.649278 ignition[1202]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:41:56.679009 ignition[1202]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:41:56.679009 ignition[1202]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:41:56.679009 ignition[1202]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:41:56.679009 ignition[1202]: INFO : files: files passed
Sep 9 23:41:56.679009 ignition[1202]: INFO : Ignition finished successfully
Sep 9 23:41:56.658551 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:41:56.667963 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:41:56.693391 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:41:56.708340 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:41:56.708424 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:41:56.740717 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:41:56.740717 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:41:56.754283 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:41:56.748770 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:41:56.759427 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:41:56.770536 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:41:56.816725 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:41:56.816871 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:41:56.826303 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:41:56.834906 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:41:56.844114 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:41:56.845948 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:41:56.885152 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:41:56.891942 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:41:56.915256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:41:56.920665 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:41:56.930732 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:41:56.939337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:41:56.939450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:41:56.951117 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:41:56.960001 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:41:56.967348 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:41:56.975051 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:41:56.984511 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:41:56.993296 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:41:57.002271 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:41:57.011054 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:41:57.020661 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:41:57.030264 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:41:57.038036 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:41:57.045306 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:41:57.045423 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:41:57.056350 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:41:57.061755 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:41:57.071127 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:41:57.071205 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:41:57.080254 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:41:57.080363 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:41:57.092926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:41:57.093018 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:41:57.098433 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:41:57.098505 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:41:57.108286 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 9 23:41:57.108350 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 23:41:57.119080 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:41:57.181909 ignition[1255]: INFO : Ignition 2.21.0
Sep 9 23:41:57.181909 ignition[1255]: INFO : Stage: umount
Sep 9 23:41:57.181909 ignition[1255]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:41:57.181909 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 23:41:57.181909 ignition[1255]: INFO : umount: umount passed
Sep 9 23:41:57.181909 ignition[1255]: INFO : Ignition finished successfully
Sep 9 23:41:57.133556 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:41:57.133694 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:41:57.154879 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:41:57.170400 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:41:57.170542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:41:57.178134 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:41:57.178219 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:41:57.191013 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:41:57.191107 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:41:57.204224 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:41:57.204331 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:41:57.212844 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:41:57.212895 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:41:57.223927 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:41:57.223986 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:41:57.233124 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 23:41:57.233164 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 23:41:57.242221 systemd[1]: Stopped target network.target - Network.
Sep 9 23:41:57.251363 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:41:57.251446 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:41:57.260316 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:41:57.268836 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:41:57.272812 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:41:57.278533 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:41:57.286105 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:41:57.295004 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:41:57.295047 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:41:57.302518 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:41:57.302564 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:41:57.311263 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:41:57.311325 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:41:57.319011 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:41:57.319044 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:41:57.329122 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:41:57.336969 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:41:57.349526 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:41:57.350132 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:41:57.350228 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:41:57.362690 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 23:41:57.549730 kernel: hv_netvsc 00224878-f7f0-0022-4878-f7f000224878 eth0: Data path switched from VF: enP52629s1
Sep 9 23:41:57.362937 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 23:41:57.363035 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 23:41:57.374400 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 23:41:57.375987 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 23:41:57.381694 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 23:41:57.381740 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:41:57.392063 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 23:41:57.405159 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 23:41:57.405236 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:41:57.421605 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:41:57.421670 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:41:57.432899 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 23:41:57.432942 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:41:57.437451 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 23:41:57.437492 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:41:57.450424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:41:57.459774 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 23:41:57.459850 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:41:57.494143 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 23:41:57.499915 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:41:57.507385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 23:41:57.507468 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:41:57.516715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 23:41:57.516745 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:41:57.533910 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 23:41:57.533971 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:41:57.549836 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 23:41:57.549899 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:41:57.559614 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 23:41:57.559666 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:41:57.575021 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 23:41:57.590377 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 23:41:57.590454 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:41:57.600669 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 23:41:57.600714 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:41:57.610549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:41:57.610603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:41:57.619937 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 23:41:57.619995 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 23:41:57.620026 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:41:57.620368 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 23:41:57.621822 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 23:41:57.643266 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 23:41:57.643426 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 23:41:57.696244 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 23:41:57.696421 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 23:41:57.702642 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 23:41:57.710298 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 23:41:57.710379 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 23:41:57.719262 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 23:41:57.764550 systemd[1]: Switching root.
Sep 9 23:41:57.847916 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Sep 9 23:41:57.847955 systemd-journald[224]: Journal stopped
Sep 9 23:42:06.280011 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 23:42:06.280110 kernel: SELinux: policy capability open_perms=1
Sep 9 23:42:06.280122 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 23:42:06.280128 kernel: SELinux: policy capability always_check_network=0
Sep 9 23:42:06.280138 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 23:42:06.280143 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 23:42:06.280149 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 23:42:06.280154 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 23:42:06.280160 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 23:42:06.280165 kernel: audit: type=1403 audit(1757461318.992:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 23:42:06.280173 systemd[1]: Successfully loaded SELinux policy in 217.843ms.
Sep 9 23:42:06.280181 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.267ms.
Sep 9 23:42:06.280188 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:42:06.280194 systemd[1]: Detected virtualization microsoft.
Sep 9 23:42:06.280200 systemd[1]: Detected architecture arm64.
Sep 9 23:42:06.280208 systemd[1]: Detected first boot.
Sep 9 23:42:06.280214 systemd[1]: Hostname set to .
Sep 9 23:42:06.280220 systemd[1]: Initializing machine ID from random generator.
Sep 9 23:42:06.280227 zram_generator::config[1298]: No configuration found.
Sep 9 23:42:06.280234 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 23:42:06.280239 systemd[1]: Populated /etc with preset unit settings.
Sep 9 23:42:06.280899 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 23:42:06.280921 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 23:42:06.280928 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 23:42:06.280935 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:42:06.280942 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 23:42:06.280949 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 23:42:06.280955 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 23:42:06.280961 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 23:42:06.280968 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 23:42:06.280975 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 23:42:06.280981 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 23:42:06.280987 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 23:42:06.280993 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:42:06.280999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:42:06.281005 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 23:42:06.281011 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 23:42:06.281018 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 23:42:06.281025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:42:06.281031 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 23:42:06.281039 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:42:06.281045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:42:06.281052 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 23:42:06.281059 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 23:42:06.281065 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:42:06.281075 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 23:42:06.281081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:42:06.281088 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:42:06.281094 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:42:06.281100 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:42:06.281106 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 23:42:06.281113 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 23:42:06.281120 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 23:42:06.281127 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:42:06.281133 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:42:06.281139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:42:06.281146 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 23:42:06.281152 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 23:42:06.281159 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 23:42:06.281166 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 23:42:06.281172 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 23:42:06.281178 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 23:42:06.281184 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 23:42:06.281191 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 23:42:06.281198 systemd[1]: Reached target machines.target - Containers.
Sep 9 23:42:06.281204 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 23:42:06.281212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:42:06.281219 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:42:06.281225 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 23:42:06.281232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:42:06.281238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:42:06.281245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:42:06.281251 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 23:42:06.281257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:42:06.281264 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 23:42:06.281271 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 23:42:06.281277 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 23:42:06.281284 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 23:42:06.281290 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 23:42:06.281296 kernel: fuse: init (API version 7.41)
Sep 9 23:42:06.281303 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:42:06.281309 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:42:06.281315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:42:06.281322 kernel: loop: module loaded
Sep 9 23:42:06.281329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:42:06.281335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 23:42:06.281341 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 23:42:06.281348 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:42:06.281355 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 23:42:06.281361 systemd[1]: Stopped verity-setup.service.
Sep 9 23:42:06.281367 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 23:42:06.281403 systemd-journald[1381]: Collecting audit messages is disabled.
Sep 9 23:42:06.281421 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 23:42:06.281428 systemd-journald[1381]: Journal started
Sep 9 23:42:06.281445 systemd-journald[1381]: Runtime Journal (/run/log/journal/edd8741ca03248d592af1fcd6a821a82) is 8M, max 78.5M, 70.5M free.
Sep 9 23:42:05.424638 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 23:42:05.429506 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 9 23:42:05.429942 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 23:42:05.430240 systemd[1]: systemd-journald.service: Consumed 2.488s CPU time.
Sep 9 23:42:06.293825 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:42:06.294534 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 23:42:06.301110 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 23:42:06.302813 kernel: ACPI: bus type drm_connector registered
Sep 9 23:42:06.306699 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 23:42:06.311469 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 23:42:06.319840 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 23:42:06.325845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:42:06.331537 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 23:42:06.331698 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 23:42:06.338361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:42:06.338532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:42:06.343348 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:42:06.343485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:42:06.348212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:42:06.348344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:42:06.353714 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 23:42:06.354076 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 23:42:06.358906 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:42:06.359035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:42:06.364940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:42:06.370467 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:42:06.375769 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 23:42:06.381368 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 23:42:06.393865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:42:06.405266 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:42:06.411216 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 23:42:06.425262 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 23:42:06.430468 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 23:42:06.430575 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:42:06.436276 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 23:42:06.442699 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 23:42:06.447836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:42:06.468021 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 23:42:06.475756 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 23:42:06.482467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:42:06.484249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 23:42:06.495495 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:42:06.496789 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:42:06.505007 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 23:42:06.511925 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 23:42:06.521227 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 23:42:06.527603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 23:42:06.534851 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 23:42:06.543118 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 23:42:06.552357 systemd-journald[1381]: Time spent on flushing to /var/log/journal/edd8741ca03248d592af1fcd6a821a82 is 52.920ms for 946 entries.
Sep 9 23:42:06.552357 systemd-journald[1381]: System Journal (/var/log/journal/edd8741ca03248d592af1fcd6a821a82) is 11.8M, max 2.6G, 2.6G free.
Sep 9 23:42:06.712637 systemd-journald[1381]: Received client request to flush runtime journal.
Sep 9 23:42:06.712690 systemd-journald[1381]: /var/log/journal/edd8741ca03248d592af1fcd6a821a82/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 9 23:42:06.712709 systemd-journald[1381]: Rotating system journal.
Sep 9 23:42:06.712726 kernel: loop0: detected capacity change from 0 to 203944
Sep 9 23:42:06.558352 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 23:42:06.668226 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:42:06.713755 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 23:42:06.725887 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 23:42:06.731446 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 23:42:06.732384 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 23:42:06.782816 kernel: loop1: detected capacity change from 0 to 100608
Sep 9 23:42:07.311564 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 23:42:07.317328 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:42:07.341828 kernel: loop2: detected capacity change from 0 to 119320
Sep 9 23:42:07.517091 systemd-tmpfiles[1456]: ACLs are not supported, ignoring.
Sep 9 23:42:07.517106 systemd-tmpfiles[1456]: ACLs are not supported, ignoring.
Sep 9 23:42:07.521024 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:42:07.852826 kernel: loop3: detected capacity change from 0 to 29264
Sep 9 23:42:08.380918 kernel: loop4: detected capacity change from 0 to 203944
Sep 9 23:42:08.395849 kernel: loop5: detected capacity change from 0 to 100608
Sep 9 23:42:08.408835 kernel: loop6: detected capacity change from 0 to 119320
Sep 9 23:42:08.428811 kernel: loop7: detected capacity change from 0 to 29264
Sep 9 23:42:08.437634 (sd-merge)[1461]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 9 23:42:08.438071 (sd-merge)[1461]: Merged extensions into '/usr'.
Sep 9 23:42:08.441934 systemd[1]: Reload requested from client PID 1437 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 23:42:08.441949 systemd[1]: Reloading...
Sep 9 23:42:08.505819 zram_generator::config[1489]: No configuration found.
Sep 9 23:42:08.704215 systemd[1]: Reloading finished in 261 ms.
Sep 9 23:42:08.721033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 23:42:08.726623 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 23:42:08.742987 systemd[1]: Starting ensure-sysext.service...
Sep 9 23:42:08.747388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:42:08.753982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:42:08.783688 systemd[1]: Reload requested from client PID 1543 ('systemctl') (unit ensure-sysext.service)...
Sep 9 23:42:08.783702 systemd[1]: Reloading...
Sep 9 23:42:08.788611 systemd-udevd[1545]: Using default interface naming scheme 'v255'.
Sep 9 23:42:08.806189 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 23:42:08.806217 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 23:42:08.806475 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 23:42:08.806623 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 23:42:08.807096 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 23:42:08.807237 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Sep 9 23:42:08.807265 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Sep 9 23:42:08.840827 zram_generator::config[1573]: No configuration found.
Sep 9 23:42:08.855745 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:42:08.855762 systemd-tmpfiles[1544]: Skipping /boot
Sep 9 23:42:08.862049 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:42:08.862061 systemd-tmpfiles[1544]: Skipping /boot
Sep 9 23:42:08.981747 systemd[1]: Reloading finished in 197 ms.
Sep 9 23:42:08.997780 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:42:09.007938 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:42:09.045964 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 23:42:09.060098 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 23:42:09.069570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:42:09.082000 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 23:42:09.091122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:42:09.099044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:42:09.106007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:42:09.117214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:42:09.126175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:42:09.126311 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:42:09.127154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:42:09.127327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:42:09.132282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:42:09.132427 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:42:09.137810 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:42:09.138531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:42:09.147027 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:42:09.147176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:42:09.149423 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 23:42:09.156475 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 23:42:09.164946 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 9 23:42:09.169680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:42:09.171002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:42:09.177003 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:42:09.183748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:42:09.193329 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:42:09.198865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:42:09.198992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:42:09.199107 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:42:09.205472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:42:09.205722 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:42:09.213581 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:42:09.213953 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:42:09.221154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:42:09.221321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:42:09.227409 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:42:09.227559 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:42:09.239168 systemd[1]: Finished ensure-sysext.service.
Sep 9 23:42:09.243899 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 23:42:09.249660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:42:09.249716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:42:09.350351 systemd-resolved[1633]: Positive Trust Anchors: Sep 9 23:42:09.350372 systemd-resolved[1633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:42:09.350392 systemd-resolved[1633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:42:09.401463 systemd-resolved[1633]: Using system hostname 'ci-4426.0.0-n-3e4141976f'. Sep 9 23:42:09.402707 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:42:09.408123 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:42:09.460132 augenrules[1677]: No rules Sep 9 23:42:09.462264 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:42:09.462499 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:42:09.511924 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:42:09.594098 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:42:09.604554 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:42:09.670240 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Sep 9 23:42:09.775834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#247 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 9 23:42:09.790172 systemd-networkd[1687]: lo: Link UP Sep 9 23:42:09.790188 systemd-networkd[1687]: lo: Gained carrier Sep 9 23:42:09.791416 systemd-networkd[1687]: Enumeration completed Sep 9 23:42:09.791526 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:42:09.792448 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:42:09.792452 systemd-networkd[1687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:42:09.800976 systemd[1]: Reached target network.target - Network. Sep 9 23:42:09.808816 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 23:42:09.808928 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 23:42:09.819563 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 23:42:09.851510 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. 
Sep 9 23:42:09.870707 kernel: hv_vmbus: registering driver hv_balloon Sep 9 23:42:09.870815 kernel: hv_vmbus: registering driver hyperv_fb Sep 9 23:42:09.870831 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 9 23:42:09.875396 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 9 23:42:09.880231 kernel: Console: switching to colour dummy device 80x25 Sep 9 23:42:09.881815 kernel: mlx5_core cd95:00:02.0 enP52629s1: Link up Sep 9 23:42:09.882043 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 9 23:42:09.893855 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 9 23:42:09.899064 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 9 23:42:09.899143 kernel: Console: switching to colour frame buffer device 128x48 Sep 9 23:42:09.915824 kernel: hv_netvsc 00224878-f7f0-0022-4878-f7f000224878 eth0: Data path switched to VF: enP52629s1 Sep 9 23:42:09.916179 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:42:09.916395 systemd-networkd[1687]: enP52629s1: Link UP Sep 9 23:42:09.916546 systemd-networkd[1687]: eth0: Link UP Sep 9 23:42:09.916550 systemd-networkd[1687]: eth0: Gained carrier Sep 9 23:42:09.916564 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:42:09.920014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:42:09.935868 systemd-networkd[1687]: enP52629s1: Gained carrier Sep 9 23:42:09.940693 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 23:42:09.947275 systemd-networkd[1687]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 9 23:42:09.959308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 9 23:42:09.959527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:42:09.968341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:42:10.053596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 9 23:42:10.060320 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 23:42:10.166826 kernel: MACsec IEEE 802.1AE Sep 9 23:42:10.176009 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 23:42:11.396880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:42:11.406899 systemd-networkd[1687]: eth0: Gained IPv6LL Sep 9 23:42:11.409868 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 23:42:11.415186 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 23:42:11.983550 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:42:11.989127 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:42:17.618623 ldconfig[1432]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:42:17.634791 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:42:17.641261 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:42:17.671811 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:42:17.676563 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:42:17.680868 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 9 23:42:17.685716 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:42:17.690531 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:42:17.695050 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 23:42:17.700161 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 23:42:17.704671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 23:42:17.704705 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:42:17.708285 systemd[1]: Reached target timers.target - Timer Units. Sep 9 23:42:17.740888 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 23:42:17.746787 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 23:42:17.752614 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 23:42:17.758442 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 23:42:17.763485 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 23:42:17.769329 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 23:42:17.773966 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 23:42:17.779342 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 23:42:17.783535 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:42:17.787665 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:42:17.791232 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 9 23:42:17.791265 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:42:17.806095 systemd[1]: Starting chronyd.service - NTP client/server... Sep 9 23:42:17.820934 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 23:42:17.826155 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 23:42:17.834025 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 23:42:17.846055 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 23:42:17.855157 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 23:42:17.864033 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 23:42:17.868368 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 23:42:17.869960 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 9 23:42:17.874286 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 9 23:42:17.875610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:17.882020 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 23:42:17.888302 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 23:42:17.889822 jq[1838]: false Sep 9 23:42:17.896049 KVP[1840]: KVP starting; pid is:1840 Sep 9 23:42:17.897973 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 9 23:42:17.903420 chronyd[1830]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 9 23:42:17.906387 KVP[1840]: KVP LIC Version: 3.1 Sep 9 23:42:17.907835 kernel: hv_utils: KVP IC version 4.0 Sep 9 23:42:17.909085 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 23:42:17.918013 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 23:42:17.928306 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 23:42:17.934709 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 23:42:17.935303 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 23:42:17.936918 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 23:42:17.938157 extend-filesystems[1839]: Found /dev/sda6 Sep 9 23:42:17.943971 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 23:42:17.956454 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 23:42:17.959108 jq[1855]: true Sep 9 23:42:17.962205 chronyd[1830]: Timezone right/UTC failed leap second check, ignoring Sep 9 23:42:17.962482 chronyd[1830]: Loaded seccomp filter (level 2) Sep 9 23:42:17.965297 systemd[1]: Started chronyd.service - NTP client/server. Sep 9 23:42:17.969588 extend-filesystems[1839]: Found /dev/sda9 Sep 9 23:42:17.980510 extend-filesystems[1839]: Checking size of /dev/sda9 Sep 9 23:42:17.971108 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 23:42:17.971290 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 23:42:17.977550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 9 23:42:17.978012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 23:42:17.992183 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 23:42:17.994351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 23:42:18.015104 (ntainerd)[1869]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 23:42:18.017455 jq[1868]: true Sep 9 23:42:18.036458 extend-filesystems[1839]: Old size kept for /dev/sda9 Sep 9 23:42:18.042942 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 23:42:18.043136 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 23:42:18.062424 update_engine[1853]: I20250909 23:42:18.059074 1853 main.cc:92] Flatcar Update Engine starting Sep 9 23:42:18.068870 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 23:42:18.080826 tar[1864]: linux-arm64/helm Sep 9 23:42:18.102211 systemd-logind[1852]: New seat seat0. Sep 9 23:42:18.129074 systemd-logind[1852]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 9 23:42:18.129389 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 23:42:18.183435 bash[1906]: Updated "/home/core/.ssh/authorized_keys" Sep 9 23:42:18.188399 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 23:42:18.197855 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 23:42:18.384758 dbus-daemon[1833]: [system] SELinux support is enabled Sep 9 23:42:18.385031 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 23:42:18.392743 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 9 23:42:18.392769 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 23:42:18.394174 dbus-daemon[1833]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 23:42:18.399148 update_engine[1853]: I20250909 23:42:18.399092 1853 update_check_scheduler.cc:74] Next update check in 11m42s Sep 9 23:42:18.401167 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 23:42:18.401193 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 23:42:18.410712 systemd[1]: Started update-engine.service - Update Engine. Sep 9 23:42:18.420061 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 23:42:18.488854 coreos-metadata[1832]: Sep 09 23:42:18.488 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 9 23:42:18.493237 coreos-metadata[1832]: Sep 09 23:42:18.493 INFO Fetch successful Sep 9 23:42:18.493541 coreos-metadata[1832]: Sep 09 23:42:18.493 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 9 23:42:18.500002 coreos-metadata[1832]: Sep 09 23:42:18.498 INFO Fetch successful Sep 9 23:42:18.500002 coreos-metadata[1832]: Sep 09 23:42:18.499 INFO Fetching http://168.63.129.16/machine/cf1edef9-0d68-4521-8c51-92afef45221d/eaf931fd%2D44bb%2D403d%2Db6a0%2Dd494574fbd69.%5Fci%2D4426.0.0%2Dn%2D3e4141976f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 9 23:42:18.501702 coreos-metadata[1832]: Sep 09 23:42:18.501 INFO Fetch successful Sep 9 23:42:18.502347 coreos-metadata[1832]: Sep 09 23:42:18.502 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 9 23:42:18.512804 coreos-metadata[1832]: Sep 09 23:42:18.512 INFO Fetch successful Sep 9 23:42:18.552867 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata 
Agent. Sep 9 23:42:18.566104 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 23:42:18.589356 tar[1864]: linux-arm64/LICENSE Sep 9 23:42:18.589933 tar[1864]: linux-arm64/README.md Sep 9 23:42:18.606494 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 23:42:18.669731 locksmithd[1977]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 23:42:18.718850 sshd_keygen[1879]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 23:42:18.736241 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 23:42:18.744130 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 23:42:18.752021 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 9 23:42:18.769421 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 23:42:18.770909 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 23:42:18.784610 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 23:42:18.794035 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 9 23:42:18.827395 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 23:42:18.837839 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 23:42:18.846029 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 23:42:18.851673 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 9 23:42:18.884457 containerd[1869]: time="2025-09-09T23:42:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 23:42:18.885457 containerd[1869]: time="2025-09-09T23:42:18.885415228Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 23:42:18.895826 containerd[1869]: time="2025-09-09T23:42:18.895549092Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.888µs" Sep 9 23:42:18.895826 containerd[1869]: time="2025-09-09T23:42:18.895590068Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 23:42:18.895826 containerd[1869]: time="2025-09-09T23:42:18.895605412Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 23:42:18.895826 containerd[1869]: time="2025-09-09T23:42:18.895767084Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 23:42:18.895826 containerd[1869]: time="2025-09-09T23:42:18.895777612Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 23:42:18.895986 containerd[1869]: time="2025-09-09T23:42:18.895969684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896114 containerd[1869]: time="2025-09-09T23:42:18.896098876Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896158 containerd[1869]: time="2025-09-09T23:42:18.896150012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896443 
containerd[1869]: time="2025-09-09T23:42:18.896422860Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896500 containerd[1869]: time="2025-09-09T23:42:18.896488300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896550 containerd[1869]: time="2025-09-09T23:42:18.896540932Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896598 containerd[1869]: time="2025-09-09T23:42:18.896585636Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896728 containerd[1869]: time="2025-09-09T23:42:18.896712036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 23:42:18.896995 containerd[1869]: time="2025-09-09T23:42:18.896972276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:42:18.897074 containerd[1869]: time="2025-09-09T23:42:18.897062876Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:42:18.897131 containerd[1869]: time="2025-09-09T23:42:18.897119268Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 23:42:18.897213 containerd[1869]: time="2025-09-09T23:42:18.897199716Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 23:42:18.897474 containerd[1869]: 
time="2025-09-09T23:42:18.897455852Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 23:42:18.897597 containerd[1869]: time="2025-09-09T23:42:18.897583068Z" level=info msg="metadata content store policy set" policy=shared Sep 9 23:42:18.901624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:18.910318 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:18.917906 containerd[1869]: time="2025-09-09T23:42:18.917854460Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 23:42:18.918014 containerd[1869]: time="2025-09-09T23:42:18.917936796Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 23:42:18.918014 containerd[1869]: time="2025-09-09T23:42:18.917952028Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 23:42:18.918014 containerd[1869]: time="2025-09-09T23:42:18.917960588Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 23:42:18.918014 containerd[1869]: time="2025-09-09T23:42:18.917969756Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 23:42:18.918014 containerd[1869]: time="2025-09-09T23:42:18.917989900Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 23:42:18.918014 containerd[1869]: time="2025-09-09T23:42:18.917998756Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 23:42:18.918014 containerd[1869]: time="2025-09-09T23:42:18.918006132Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 
23:42:18.918117 containerd[1869]: time="2025-09-09T23:42:18.918020604Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 23:42:18.918117 containerd[1869]: time="2025-09-09T23:42:18.918033772Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 23:42:18.918117 containerd[1869]: time="2025-09-09T23:42:18.918040228Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 23:42:18.918117 containerd[1869]: time="2025-09-09T23:42:18.918049932Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 23:42:18.918235 containerd[1869]: time="2025-09-09T23:42:18.918211908Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 23:42:18.918235 containerd[1869]: time="2025-09-09T23:42:18.918235196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 23:42:18.918280 containerd[1869]: time="2025-09-09T23:42:18.918248052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 23:42:18.918280 containerd[1869]: time="2025-09-09T23:42:18.918255260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 23:42:18.918280 containerd[1869]: time="2025-09-09T23:42:18.918264060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 23:42:18.918280 containerd[1869]: time="2025-09-09T23:42:18.918271588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 23:42:18.918280 containerd[1869]: time="2025-09-09T23:42:18.918279044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 23:42:18.918341 containerd[1869]: time="2025-09-09T23:42:18.918294516Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 23:42:18.918341 containerd[1869]: time="2025-09-09T23:42:18.918302140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 23:42:18.918341 containerd[1869]: time="2025-09-09T23:42:18.918313868Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 23:42:18.918341 containerd[1869]: time="2025-09-09T23:42:18.918320692Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 23:42:18.918403 containerd[1869]: time="2025-09-09T23:42:18.918385388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 23:42:18.918420 containerd[1869]: time="2025-09-09T23:42:18.918404676Z" level=info msg="Start snapshots syncer" Sep 9 23:42:18.918437 containerd[1869]: time="2025-09-09T23:42:18.918422084Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 23:42:18.918649 containerd[1869]: time="2025-09-09T23:42:18.918606244Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 23:42:18.918738 containerd[1869]: time="2025-09-09T23:42:18.918658156Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 23:42:18.918738 containerd[1869]: time="2025-09-09T23:42:18.918714020Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918861292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918881692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918888596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918898868Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918907860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918914748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918921508Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918942988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918966244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.918973796Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.919007700Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.919019428Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:42:18.919501 containerd[1869]: time="2025-09-09T23:42:18.919025260Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919048004Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919054220Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919062876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919069588Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919082532Z" level=info msg="runtime interface created" Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919085812Z" level=info msg="created NRI interface" Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919091204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919099796Z" level=info msg="Connect containerd service" Sep 9 23:42:18.919739 containerd[1869]: time="2025-09-09T23:42:18.919144484Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 23:42:18.920015 containerd[1869]: 
time="2025-09-09T23:42:18.919881908Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192380900Z" level=info msg="Start subscribing containerd event" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192456116Z" level=info msg="Start recovering state" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192537196Z" level=info msg="Start event monitor" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192548084Z" level=info msg="Start cni network conf syncer for default" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192546324Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192556268Z" level=info msg="Start streaming server" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192584364Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192590684Z" level=info msg="runtime interface starting up..." Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192594708Z" level=info msg="starting plugins..." Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192610676Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192595484Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 23:42:19.192979 containerd[1869]: time="2025-09-09T23:42:19.192742572Z" level=info msg="containerd successfully booted in 0.308634s" Sep 9 23:42:19.192951 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 9 23:42:19.201616 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 23:42:19.206872 systemd[1]: Startup finished in 1.611s (kernel) + 15.292s (initrd) + 20.430s (userspace) = 37.335s. Sep 9 23:42:19.289962 kubelet[2025]: E0909 23:42:19.289908 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:19.291992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:19.292110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:19.292495 systemd[1]: kubelet.service: Consumed 561ms CPU time, 255.2M memory peak. Sep 9 23:42:20.443150 login[2016]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 9 23:42:20.444144 login[2017]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:20.455838 systemd-logind[1852]: New session 1 of user core. Sep 9 23:42:20.456205 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 23:42:20.458002 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 23:42:20.493946 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 23:42:20.497995 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 23:42:20.535464 (systemd)[2050]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 23:42:20.539423 systemd-logind[1852]: New session c1 of user core. Sep 9 23:42:20.834914 systemd[2050]: Queued start job for default target default.target. Sep 9 23:42:20.843748 systemd[2050]: Created slice app.slice - User Application Slice. 
Sep 9 23:42:20.843960 systemd[2050]: Reached target paths.target - Paths. Sep 9 23:42:20.844078 systemd[2050]: Reached target timers.target - Timers. Sep 9 23:42:20.845414 systemd[2050]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 23:42:20.853553 systemd[2050]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 23:42:20.853607 systemd[2050]: Reached target sockets.target - Sockets. Sep 9 23:42:20.853737 systemd[2050]: Reached target basic.target - Basic System. Sep 9 23:42:20.853964 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 23:42:20.854493 systemd[2050]: Reached target default.target - Main User Target. Sep 9 23:42:20.854531 systemd[2050]: Startup finished in 307ms. Sep 9 23:42:20.858935 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 23:42:21.443903 login[2016]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:21.448472 systemd-logind[1852]: New session 2 of user core. Sep 9 23:42:21.452938 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 9 23:42:22.012851 waagent[2013]: 2025-09-09T23:42:22.012736Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 9 23:42:22.017666 waagent[2013]: 2025-09-09T23:42:22.017605Z INFO Daemon Daemon OS: flatcar 4426.0.0 Sep 9 23:42:22.021012 waagent[2013]: 2025-09-09T23:42:22.020965Z INFO Daemon Daemon Python: 3.11.13 Sep 9 23:42:22.024491 waagent[2013]: 2025-09-09T23:42:22.024446Z INFO Daemon Daemon Run daemon Sep 9 23:42:22.027428 waagent[2013]: 2025-09-09T23:42:22.027392Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.0.0' Sep 9 23:42:22.034017 waagent[2013]: 2025-09-09T23:42:22.033976Z INFO Daemon Daemon Using waagent for provisioning Sep 9 23:42:22.038549 waagent[2013]: 2025-09-09T23:42:22.038508Z INFO Daemon Daemon Activate resource disk Sep 9 23:42:22.048031 waagent[2013]: 2025-09-09T23:42:22.042043Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 9 23:42:22.050390 waagent[2013]: 2025-09-09T23:42:22.050341Z INFO Daemon Daemon Found device: None Sep 9 23:42:22.054552 waagent[2013]: 2025-09-09T23:42:22.054516Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 9 23:42:22.061448 waagent[2013]: 2025-09-09T23:42:22.061415Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 9 23:42:22.070372 waagent[2013]: 2025-09-09T23:42:22.070325Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 9 23:42:22.075265 waagent[2013]: 2025-09-09T23:42:22.075229Z INFO Daemon Daemon Running default provisioning handler Sep 9 23:42:22.085476 waagent[2013]: 2025-09-09T23:42:22.085431Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 9 23:42:22.095868 waagent[2013]: 2025-09-09T23:42:22.095773Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 9 23:42:22.104311 waagent[2013]: 2025-09-09T23:42:22.104270Z INFO Daemon Daemon cloud-init is enabled: False Sep 9 23:42:22.108700 waagent[2013]: 2025-09-09T23:42:22.108670Z INFO Daemon Daemon Copying ovf-env.xml Sep 9 23:42:22.224985 waagent[2013]: 2025-09-09T23:42:22.224906Z INFO Daemon Daemon Successfully mounted dvd Sep 9 23:42:22.252009 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 9 23:42:22.258382 waagent[2013]: 2025-09-09T23:42:22.254220Z INFO Daemon Daemon Detect protocol endpoint Sep 9 23:42:22.258703 waagent[2013]: 2025-09-09T23:42:22.258657Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 9 23:42:22.263224 waagent[2013]: 2025-09-09T23:42:22.263096Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 9 23:42:22.268524 waagent[2013]: 2025-09-09T23:42:22.268464Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 9 23:42:22.272773 waagent[2013]: 2025-09-09T23:42:22.272721Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 9 23:42:22.276521 waagent[2013]: 2025-09-09T23:42:22.276423Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 9 23:42:22.323165 waagent[2013]: 2025-09-09T23:42:22.323113Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 9 23:42:22.329087 waagent[2013]: 2025-09-09T23:42:22.329058Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 9 23:42:22.333425 waagent[2013]: 2025-09-09T23:42:22.333380Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 9 23:42:22.464524 waagent[2013]: 2025-09-09T23:42:22.464447Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 9 23:42:22.469858 waagent[2013]: 2025-09-09T23:42:22.469785Z INFO Daemon Daemon Forcing an update of the goal state. 
Sep 9 23:42:22.477590 waagent[2013]: 2025-09-09T23:42:22.477533Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 9 23:42:22.517049 waagent[2013]: 2025-09-09T23:42:22.516934Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 9 23:42:22.521542 waagent[2013]: 2025-09-09T23:42:22.521497Z INFO Daemon Sep 9 23:42:22.523712 waagent[2013]: 2025-09-09T23:42:22.523671Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6a9e1866-8a90-4e64-bcd1-65fd84ebebdd eTag: 15621894538815357830 source: Fabric] Sep 9 23:42:22.532100 waagent[2013]: 2025-09-09T23:42:22.532054Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 9 23:42:22.536788 waagent[2013]: 2025-09-09T23:42:22.536748Z INFO Daemon Sep 9 23:42:22.538869 waagent[2013]: 2025-09-09T23:42:22.538832Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 9 23:42:22.548114 waagent[2013]: 2025-09-09T23:42:22.548075Z INFO Daemon Daemon Downloading artifacts profile blob Sep 9 23:42:22.616305 waagent[2013]: 2025-09-09T23:42:22.616240Z INFO Daemon Downloaded certificate {'thumbprint': '3742567116801F9A17A89A319E047EA3A0D2BC68', 'hasPrivateKey': True} Sep 9 23:42:22.624468 waagent[2013]: 2025-09-09T23:42:22.624417Z INFO Daemon Fetch goal state completed Sep 9 23:42:22.677897 waagent[2013]: 2025-09-09T23:42:22.677846Z INFO Daemon Daemon Starting provisioning Sep 9 23:42:22.682011 waagent[2013]: 2025-09-09T23:42:22.681958Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 9 23:42:22.685332 waagent[2013]: 2025-09-09T23:42:22.685293Z INFO Daemon Daemon Set hostname [ci-4426.0.0-n-3e4141976f] Sep 9 23:42:22.732207 waagent[2013]: 2025-09-09T23:42:22.732146Z INFO Daemon Daemon Publish hostname [ci-4426.0.0-n-3e4141976f] Sep 9 23:42:22.737320 waagent[2013]: 2025-09-09T23:42:22.737250Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 9 23:42:22.742111 waagent[2013]: 2025-09-09T23:42:22.742059Z INFO Daemon Daemon Primary interface is [eth0] Sep 9 23:42:22.752148 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:42:22.752417 systemd-networkd[1687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:42:22.752486 systemd-networkd[1687]: eth0: DHCP lease lost Sep 9 23:42:22.753121 waagent[2013]: 2025-09-09T23:42:22.753071Z INFO Daemon Daemon Create user account if not exists Sep 9 23:42:22.757683 waagent[2013]: 2025-09-09T23:42:22.757633Z INFO Daemon Daemon User core already exists, skip useradd Sep 9 23:42:22.762401 waagent[2013]: 2025-09-09T23:42:22.762347Z INFO Daemon Daemon Configure sudoer Sep 9 23:42:22.770973 waagent[2013]: 2025-09-09T23:42:22.770879Z INFO Daemon Daemon Configure sshd Sep 9 23:42:22.778627 waagent[2013]: 2025-09-09T23:42:22.778564Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 9 23:42:22.787343 waagent[2013]: 2025-09-09T23:42:22.787276Z INFO Daemon Daemon Deploy ssh public key. 
Sep 9 23:42:22.790585 systemd-networkd[1687]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 9 23:42:23.917072 waagent[2013]: 2025-09-09T23:42:23.917022Z INFO Daemon Daemon Provisioning complete Sep 9 23:42:23.931077 waagent[2013]: 2025-09-09T23:42:23.931033Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 9 23:42:23.935933 waagent[2013]: 2025-09-09T23:42:23.935898Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 9 23:42:23.943577 waagent[2013]: 2025-09-09T23:42:23.943543Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 9 23:42:24.047831 waagent[2103]: 2025-09-09T23:42:24.047047Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 9 23:42:24.047831 waagent[2103]: 2025-09-09T23:42:24.047179Z INFO ExtHandler ExtHandler OS: flatcar 4426.0.0 Sep 9 23:42:24.047831 waagent[2103]: 2025-09-09T23:42:24.047216Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 9 23:42:24.047831 waagent[2103]: 2025-09-09T23:42:24.047250Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 9 23:42:24.151929 waagent[2103]: 2025-09-09T23:42:24.151859Z INFO ExtHandler ExtHandler Distro: flatcar-4426.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 9 23:42:24.152276 waagent[2103]: 2025-09-09T23:42:24.152243Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 23:42:24.152394 waagent[2103]: 2025-09-09T23:42:24.152371Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 23:42:24.159097 waagent[2103]: 2025-09-09T23:42:24.159040Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 9 23:42:24.165637 waagent[2103]: 2025-09-09T23:42:24.165592Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 9 23:42:24.166276 
waagent[2103]: 2025-09-09T23:42:24.166239Z INFO ExtHandler Sep 9 23:42:24.166415 waagent[2103]: 2025-09-09T23:42:24.166391Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 029ee931-fbbc-4daf-8ec8-a6c223bea6e6 eTag: 15621894538815357830 source: Fabric] Sep 9 23:42:24.166738 waagent[2103]: 2025-09-09T23:42:24.166708Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 9 23:42:24.167324 waagent[2103]: 2025-09-09T23:42:24.167253Z INFO ExtHandler Sep 9 23:42:24.167446 waagent[2103]: 2025-09-09T23:42:24.167421Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 9 23:42:24.172703 waagent[2103]: 2025-09-09T23:42:24.172668Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 9 23:42:24.232843 waagent[2103]: 2025-09-09T23:42:24.232743Z INFO ExtHandler Downloaded certificate {'thumbprint': '3742567116801F9A17A89A319E047EA3A0D2BC68', 'hasPrivateKey': True} Sep 9 23:42:24.233287 waagent[2103]: 2025-09-09T23:42:24.233244Z INFO ExtHandler Fetch goal state completed Sep 9 23:42:24.247371 waagent[2103]: 2025-09-09T23:42:24.247306Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Sep 9 23:42:24.251038 waagent[2103]: 2025-09-09T23:42:24.250981Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2103 Sep 9 23:42:24.251152 waagent[2103]: 2025-09-09T23:42:24.251125Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 9 23:42:24.251407 waagent[2103]: 2025-09-09T23:42:24.251377Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 9 23:42:24.252561 waagent[2103]: 2025-09-09T23:42:24.252520Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 9 23:42:24.252937 waagent[2103]: 2025-09-09T23:42:24.252902Z INFO 
ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 9 23:42:24.253068 waagent[2103]: 2025-09-09T23:42:24.253043Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 9 23:42:24.253501 waagent[2103]: 2025-09-09T23:42:24.253467Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 9 23:42:24.333515 waagent[2103]: 2025-09-09T23:42:24.333472Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 9 23:42:24.333695 waagent[2103]: 2025-09-09T23:42:24.333665Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 9 23:42:24.338726 waagent[2103]: 2025-09-09T23:42:24.338309Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 9 23:42:24.343279 systemd[1]: Reload requested from client PID 2118 ('systemctl') (unit waagent.service)... Sep 9 23:42:24.343294 systemd[1]: Reloading... Sep 9 23:42:24.421826 zram_generator::config[2157]: No configuration found. Sep 9 23:42:24.579418 systemd[1]: Reloading finished in 235 ms. Sep 9 23:42:24.590987 waagent[2103]: 2025-09-09T23:42:24.590005Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 9 23:42:24.590987 waagent[2103]: 2025-09-09T23:42:24.590156Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 9 23:42:25.183185 waagent[2103]: 2025-09-09T23:42:25.182272Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 9 23:42:25.183185 waagent[2103]: 2025-09-09T23:42:25.182608Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 9 23:42:25.183539 waagent[2103]: 2025-09-09T23:42:25.183409Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 23:42:25.183539 waagent[2103]: 2025-09-09T23:42:25.183485Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 23:42:25.183673 waagent[2103]: 2025-09-09T23:42:25.183635Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 9 23:42:25.183770 waagent[2103]: 2025-09-09T23:42:25.183723Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 9 23:42:25.183906 waagent[2103]: 2025-09-09T23:42:25.183874Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 9 23:42:25.183906 waagent[2103]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 9 23:42:25.183906 waagent[2103]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 9 23:42:25.183906 waagent[2103]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 9 23:42:25.183906 waagent[2103]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 9 23:42:25.183906 waagent[2103]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 9 23:42:25.183906 waagent[2103]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 9 23:42:25.184385 waagent[2103]: 2025-09-09T23:42:25.184347Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 9 23:42:25.184620 waagent[2103]: 2025-09-09T23:42:25.184586Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 23:42:25.184672 waagent[2103]: 2025-09-09T23:42:25.184654Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 23:42:25.184774 waagent[2103]: 2025-09-09T23:42:25.184748Z INFO EnvHandler ExtHandler Configure routes Sep 9 23:42:25.184830 waagent[2103]: 2025-09-09T23:42:25.184814Z INFO EnvHandler ExtHandler Gateway:None Sep 9 23:42:25.184872 waagent[2103]: 2025-09-09T23:42:25.184855Z INFO EnvHandler ExtHandler Routes:None Sep 9 23:42:25.185304 waagent[2103]: 2025-09-09T23:42:25.185259Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 9 23:42:25.185410 waagent[2103]: 2025-09-09T23:42:25.185377Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 9 23:42:25.185829 waagent[2103]: 2025-09-09T23:42:25.185782Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 9 23:42:25.185899 waagent[2103]: 2025-09-09T23:42:25.185876Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 9 23:42:25.186006 waagent[2103]: 2025-09-09T23:42:25.185962Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 9 23:42:25.192538 waagent[2103]: 2025-09-09T23:42:25.192495Z INFO ExtHandler ExtHandler Sep 9 23:42:25.192759 waagent[2103]: 2025-09-09T23:42:25.192723Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 938d5c09-1abd-49e0-b796-93e6a24f71d5 correlation 491c9d11-d971-4eb3-8100-ccf6b4507a35 created: 2025-09-09T23:40:57.633148Z] Sep 9 23:42:25.193203 waagent[2103]: 2025-09-09T23:42:25.193156Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 9 23:42:25.193741 waagent[2103]: 2025-09-09T23:42:25.193700Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 9 23:42:25.222006 waagent[2103]: 2025-09-09T23:42:25.221958Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 9 23:42:25.222006 waagent[2103]: Try `iptables -h' or 'iptables --help' for more information.) Sep 9 23:42:25.222552 waagent[2103]: 2025-09-09T23:42:25.222520Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A6BEB843-C672-4133-991D-7BF599E7E639;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 9 23:42:25.275599 waagent[2103]: 2025-09-09T23:42:25.275517Z INFO MonitorHandler ExtHandler Network interfaces: Sep 9 23:42:25.275599 waagent[2103]: Executing ['ip', '-a', '-o', 'link']: Sep 9 23:42:25.275599 waagent[2103]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 9 23:42:25.275599 waagent[2103]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:f7:f0 brd ff:ff:ff:ff:ff:ff Sep 9 23:42:25.275599 waagent[2103]: 3: enP52629s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:f7:f0 brd ff:ff:ff:ff:ff:ff\ altname enP52629p0s2 Sep 9 23:42:25.275599 waagent[2103]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 9 23:42:25.275599 waagent[2103]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 9 23:42:25.275599 waagent[2103]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 9 23:42:25.275599 waagent[2103]: Executing ['ip', '-6', '-a', '-o', 
'address']: Sep 9 23:42:25.275599 waagent[2103]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 9 23:42:25.275599 waagent[2103]: 2: eth0 inet6 fe80::222:48ff:fe78:f7f0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 9 23:42:25.318508 waagent[2103]: 2025-09-09T23:42:25.318076Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 9 23:42:25.318508 waagent[2103]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:25.318508 waagent[2103]: pkts bytes target prot opt in out source destination Sep 9 23:42:25.318508 waagent[2103]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:25.318508 waagent[2103]: pkts bytes target prot opt in out source destination Sep 9 23:42:25.318508 waagent[2103]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Sep 9 23:42:25.318508 waagent[2103]: pkts bytes target prot opt in out source destination Sep 9 23:42:25.318508 waagent[2103]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 9 23:42:25.318508 waagent[2103]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 9 23:42:25.318508 waagent[2103]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 9 23:42:25.321596 waagent[2103]: 2025-09-09T23:42:25.321562Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 9 23:42:25.321596 waagent[2103]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:25.321596 waagent[2103]: pkts bytes target prot opt in out source destination Sep 9 23:42:25.321596 waagent[2103]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:25.321596 waagent[2103]: pkts bytes target prot opt in out source destination Sep 9 23:42:25.321596 waagent[2103]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Sep 9 23:42:25.321596 waagent[2103]: pkts bytes target prot opt in out source destination Sep 9 23:42:25.321596 waagent[2103]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 9 
23:42:25.321596 waagent[2103]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 9 23:42:25.321596 waagent[2103]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 9 23:42:25.322056 waagent[2103]: 2025-09-09T23:42:25.322031Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 9 23:42:29.504645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 23:42:29.505919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:29.605423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:29.612152 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:29.715989 kubelet[2253]: E0909 23:42:29.715929 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:29.718300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:29.718420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:29.719003 systemd[1]: kubelet.service: Consumed 115ms CPU time, 105.5M memory peak. Sep 9 23:42:39.756363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 23:42:39.758219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:39.865302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:42:39.879126 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:39.951899 kubelet[2267]: E0909 23:42:39.951825 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:39.954029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:39.954265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:39.954867 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107M memory peak. Sep 9 23:42:41.757151 chronyd[1830]: Selected source PHC0 Sep 9 23:42:44.450295 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 23:42:44.452850 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:59952.service - OpenSSH per-connection server daemon (10.200.16.10:59952). Sep 9 23:42:45.112047 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 59952 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:45.113160 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:45.116892 systemd-logind[1852]: New session 3 of user core. Sep 9 23:42:45.124987 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:42:45.520260 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:59954.service - OpenSSH per-connection server daemon (10.200.16.10:59954). 
Sep 9 23:42:45.935983 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 59954 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:45.937294 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:45.940941 systemd-logind[1852]: New session 4 of user core. Sep 9 23:42:45.949010 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 23:42:46.252896 sshd[2283]: Connection closed by 10.200.16.10 port 59954 Sep 9 23:42:46.252908 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Sep 9 23:42:46.255883 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:59954.service: Deactivated successfully. Sep 9 23:42:46.257459 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:42:46.259298 systemd-logind[1852]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:42:46.260189 systemd-logind[1852]: Removed session 4. Sep 9 23:42:46.347115 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:59964.service - OpenSSH per-connection server daemon (10.200.16.10:59964). Sep 9 23:42:46.837975 sshd[2289]: Accepted publickey for core from 10.200.16.10 port 59964 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:46.839028 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:46.842512 systemd-logind[1852]: New session 5 of user core. Sep 9 23:42:46.853143 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 23:42:47.187289 sshd[2292]: Connection closed by 10.200.16.10 port 59964 Sep 9 23:42:47.186519 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Sep 9 23:42:47.189623 systemd-logind[1852]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:42:47.189765 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:59964.service: Deactivated successfully. Sep 9 23:42:47.191267 systemd[1]: session-5.scope: Deactivated successfully. 
Sep 9 23:42:47.193519 systemd-logind[1852]: Removed session 5. Sep 9 23:42:47.267964 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:59978.service - OpenSSH per-connection server daemon (10.200.16.10:59978). Sep 9 23:42:47.730213 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 59978 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:47.731319 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:47.735053 systemd-logind[1852]: New session 6 of user core. Sep 9 23:42:47.745115 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 23:42:48.063029 sshd[2301]: Connection closed by 10.200.16.10 port 59978 Sep 9 23:42:48.063653 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Sep 9 23:42:48.066662 systemd-logind[1852]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:42:48.068204 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:59978.service: Deactivated successfully. Sep 9 23:42:48.070260 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:42:48.074100 systemd-logind[1852]: Removed session 6. Sep 9 23:42:48.155782 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:59980.service - OpenSSH per-connection server daemon (10.200.16.10:59980). Sep 9 23:42:48.645285 sshd[2307]: Accepted publickey for core from 10.200.16.10 port 59980 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:48.646419 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:48.650256 systemd-logind[1852]: New session 7 of user core. Sep 9 23:42:48.659125 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 23:42:49.128471 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:42:49.128725 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:42:49.151517 sudo[2311]: pam_unix(sudo:session): session closed for user root Sep 9 23:42:49.229032 sshd[2310]: Connection closed by 10.200.16.10 port 59980 Sep 9 23:42:49.229714 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Sep 9 23:42:49.233658 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:59980.service: Deactivated successfully. Sep 9 23:42:49.235534 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:42:49.236189 systemd-logind[1852]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:42:49.237460 systemd-logind[1852]: Removed session 7. Sep 9 23:42:49.315004 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:59986.service - OpenSSH per-connection server daemon (10.200.16.10:59986). Sep 9 23:42:49.742430 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 59986 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:49.743662 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:49.747920 systemd-logind[1852]: New session 8 of user core. Sep 9 23:42:49.753952 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 23:42:49.981556 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:42:49.981785 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:42:49.982658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 23:42:49.984666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 23:42:49.990256 sudo[2322]: pam_unix(sudo:session): session closed for user root Sep 9 23:42:49.996051 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:42:49.996601 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:42:50.015114 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:42:50.051028 augenrules[2347]: No rules Sep 9 23:42:50.053028 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:42:50.054839 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:42:50.056100 sudo[2321]: pam_unix(sudo:session): session closed for user root Sep 9 23:42:50.091921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:50.098088 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:50.135672 sshd[2320]: Connection closed by 10.200.16.10 port 59986 Sep 9 23:42:50.136570 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Sep 9 23:42:50.141371 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:59986.service: Deactivated successfully. Sep 9 23:42:50.141663 systemd-logind[1852]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:42:50.144691 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:42:50.146983 systemd-logind[1852]: Removed session 8. 
Sep 9 23:42:50.222432 kubelet[2357]: E0909 23:42:50.222379 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:50.227790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:50.228043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:50.229895 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.8M memory peak. Sep 9 23:42:50.232004 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:57598.service - OpenSSH per-connection server daemon (10.200.16.10:57598). Sep 9 23:42:50.696447 sshd[2367]: Accepted publickey for core from 10.200.16.10 port 57598 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:50.697562 sshd-session[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:50.702262 systemd-logind[1852]: New session 9 of user core. Sep 9 23:42:50.709976 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:42:50.954984 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:42:50.955227 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:42:52.591520 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 9 23:42:52.602294 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:42:53.673540 dockerd[2389]: time="2025-09-09T23:42:53.673282127Z" level=info msg="Starting up" Sep 9 23:42:53.674483 dockerd[2389]: time="2025-09-09T23:42:53.674456804Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 23:42:53.683523 dockerd[2389]: time="2025-09-09T23:42:53.683479880Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 23:42:53.715531 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3067467299-merged.mount: Deactivated successfully. Sep 9 23:42:53.784831 dockerd[2389]: time="2025-09-09T23:42:53.784770835Z" level=info msg="Loading containers: start." Sep 9 23:42:53.864829 kernel: Initializing XFRM netlink socket Sep 9 23:42:54.362178 systemd-networkd[1687]: docker0: Link UP Sep 9 23:42:54.382547 dockerd[2389]: time="2025-09-09T23:42:54.382496645Z" level=info msg="Loading containers: done." Sep 9 23:42:54.394849 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck320575799-merged.mount: Deactivated successfully. 
Sep 9 23:42:54.403029 dockerd[2389]: time="2025-09-09T23:42:54.402985312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 23:42:54.403121 dockerd[2389]: time="2025-09-09T23:42:54.403077546Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 23:42:54.403190 dockerd[2389]: time="2025-09-09T23:42:54.403172709Z" level=info msg="Initializing buildkit" Sep 9 23:42:54.453672 dockerd[2389]: time="2025-09-09T23:42:54.453627250Z" level=info msg="Completed buildkit initialization" Sep 9 23:42:54.458955 dockerd[2389]: time="2025-09-09T23:42:54.458852153Z" level=info msg="Daemon has completed initialization" Sep 9 23:42:54.459235 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 23:42:54.459405 dockerd[2389]: time="2025-09-09T23:42:54.459361574Z" level=info msg="API listen on /run/docker.sock" Sep 9 23:42:55.246007 containerd[1869]: time="2025-09-09T23:42:55.245943328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 23:42:55.948602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449068226.mount: Deactivated successfully. 
Sep 9 23:42:56.945548 containerd[1869]: time="2025-09-09T23:42:56.945483973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:56.948346 containerd[1869]: time="2025-09-09T23:42:56.948167998Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652441" Sep 9 23:42:56.951502 containerd[1869]: time="2025-09-09T23:42:56.951474303Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:56.956194 containerd[1869]: time="2025-09-09T23:42:56.956157593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:56.956745 containerd[1869]: time="2025-09-09T23:42:56.956717062Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.710734373s" Sep 9 23:42:56.957035 containerd[1869]: time="2025-09-09T23:42:56.956852514Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 9 23:42:56.958267 containerd[1869]: time="2025-09-09T23:42:56.958244252Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 23:42:58.050406 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Sep 9 23:42:58.093494 containerd[1869]: time="2025-09-09T23:42:58.092880309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:58.095818 containerd[1869]: time="2025-09-09T23:42:58.095781004Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460309" Sep 9 23:42:58.099765 containerd[1869]: time="2025-09-09T23:42:58.099734347Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:58.103941 containerd[1869]: time="2025-09-09T23:42:58.103898745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:58.104621 containerd[1869]: time="2025-09-09T23:42:58.104591665Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.146301724s" Sep 9 23:42:58.104712 containerd[1869]: time="2025-09-09T23:42:58.104700817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 9 23:42:58.105203 containerd[1869]: time="2025-09-09T23:42:58.105175017Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 23:42:59.293430 containerd[1869]: time="2025-09-09T23:42:59.293376869Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:59.296282 containerd[1869]: time="2025-09-09T23:42:59.296240834Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125903" Sep 9 23:42:59.300784 containerd[1869]: time="2025-09-09T23:42:59.300735671Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:59.305843 containerd[1869]: time="2025-09-09T23:42:59.305805435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:42:59.306790 containerd[1869]: time="2025-09-09T23:42:59.306568151Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.201283551s" Sep 9 23:42:59.306790 containerd[1869]: time="2025-09-09T23:42:59.306595841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 9 23:42:59.307047 containerd[1869]: time="2025-09-09T23:42:59.307012846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 23:43:00.254683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 23:43:00.257981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 23:43:00.335757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539218934.mount: Deactivated successfully. Sep 9 23:43:00.372265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:00.377165 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:43:00.468631 kubelet[2669]: E0909 23:43:00.468528 2669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:43:00.470790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:43:00.471002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:43:00.471908 systemd[1]: kubelet.service: Consumed 112ms CPU time, 107M memory peak. 
Sep 9 23:43:01.517636 containerd[1869]: time="2025-09-09T23:43:01.517447662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:01.520893 containerd[1869]: time="2025-09-09T23:43:01.520851312Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916095" Sep 9 23:43:01.524136 containerd[1869]: time="2025-09-09T23:43:01.523969392Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:01.528584 containerd[1869]: time="2025-09-09T23:43:01.528533341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:01.529047 containerd[1869]: time="2025-09-09T23:43:01.528821798Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 2.221779431s" Sep 9 23:43:01.529047 containerd[1869]: time="2025-09-09T23:43:01.528848055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 9 23:43:01.529652 containerd[1869]: time="2025-09-09T23:43:01.529409976Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 23:43:02.232504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573227943.mount: Deactivated successfully. 
Sep 9 23:43:03.146065 containerd[1869]: time="2025-09-09T23:43:03.146005135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:03.148814 containerd[1869]: time="2025-09-09T23:43:03.148755588Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 9 23:43:03.152172 containerd[1869]: time="2025-09-09T23:43:03.152137469Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:03.157261 containerd[1869]: time="2025-09-09T23:43:03.157217970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:03.157972 containerd[1869]: time="2025-09-09T23:43:03.157722938Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.628287192s" Sep 9 23:43:03.157972 containerd[1869]: time="2025-09-09T23:43:03.157753331Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 23:43:03.158478 containerd[1869]: time="2025-09-09T23:43:03.158274427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 23:43:03.706213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714305836.mount: Deactivated successfully. 
Sep 9 23:43:03.728015 containerd[1869]: time="2025-09-09T23:43:03.727959995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:03.731128 containerd[1869]: time="2025-09-09T23:43:03.730955551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 9 23:43:03.734395 containerd[1869]: time="2025-09-09T23:43:03.734364977Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:03.738823 containerd[1869]: time="2025-09-09T23:43:03.738725368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:03.739287 containerd[1869]: time="2025-09-09T23:43:03.739054866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.753079ms" Sep 9 23:43:03.739287 containerd[1869]: time="2025-09-09T23:43:03.739087331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 23:43:03.739870 containerd[1869]: time="2025-09-09T23:43:03.739507528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 23:43:03.873011 update_engine[1853]: I20250909 23:43:03.872941 1853 update_attempter.cc:509] Updating boot 
flags... Sep 9 23:43:04.440364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20146277.mount: Deactivated successfully. Sep 9 23:43:06.570852 containerd[1869]: time="2025-09-09T23:43:06.570634766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:06.574967 containerd[1869]: time="2025-09-09T23:43:06.574899391Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 9 23:43:06.578282 containerd[1869]: time="2025-09-09T23:43:06.578214373Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:06.583828 containerd[1869]: time="2025-09-09T23:43:06.583471995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:06.584092 containerd[1869]: time="2025-09-09T23:43:06.584055988Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.844523003s" Sep 9 23:43:06.584092 containerd[1869]: time="2025-09-09T23:43:06.584093717Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 9 23:43:08.567650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:08.567840 systemd[1]: kubelet.service: Consumed 112ms CPU time, 107M memory peak. 
Sep 9 23:43:08.576970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:08.590706 systemd[1]: Reload requested from client PID 2880 ('systemctl') (unit session-9.scope)... Sep 9 23:43:08.590718 systemd[1]: Reloading... Sep 9 23:43:08.695823 zram_generator::config[2926]: No configuration found. Sep 9 23:43:08.853680 systemd[1]: Reloading finished in 262 ms. Sep 9 23:43:08.903389 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 23:43:08.903465 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 23:43:08.903687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:08.903738 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95M memory peak. Sep 9 23:43:08.905414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:09.130610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:09.141092 (kubelet)[2993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:43:09.168041 kubelet[2993]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:43:09.168041 kubelet[2993]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 23:43:09.168041 kubelet[2993]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:43:09.168041 kubelet[2993]: I0909 23:43:09.167344 2993 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:43:09.397134 kubelet[2993]: I0909 23:43:09.397019 2993 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 23:43:09.397271 kubelet[2993]: I0909 23:43:09.397260 2993 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:43:09.397575 kubelet[2993]: I0909 23:43:09.397556 2993 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 23:43:09.417172 kubelet[2993]: E0909 23:43:09.417116 2993 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:43:09.418020 kubelet[2993]: I0909 23:43:09.417996 2993 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:43:09.424105 kubelet[2993]: I0909 23:43:09.424083 2993 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:43:09.428230 kubelet[2993]: I0909 23:43:09.428180 2993 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 23:43:09.429045 kubelet[2993]: I0909 23:43:09.429015 2993 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 23:43:09.429182 kubelet[2993]: I0909 23:43:09.429151 2993 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:43:09.429386 kubelet[2993]: I0909 23:43:09.429183 2993 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-n-3e4141976f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:43:09.429386 kubelet[2993]: I0909 23:43:09.429386 2993 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:43:09.429503 kubelet[2993]: I0909 23:43:09.429394 2993 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 23:43:09.429548 kubelet[2993]: I0909 23:43:09.429531 2993 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:09.432581 kubelet[2993]: I0909 23:43:09.432334 2993 kubelet.go:408] "Attempting to sync node with API server" Sep 9 23:43:09.432581 kubelet[2993]: I0909 23:43:09.432375 2993 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:43:09.432581 kubelet[2993]: I0909 23:43:09.432402 2993 kubelet.go:314] "Adding apiserver pod source" Sep 9 23:43:09.432581 kubelet[2993]: I0909 23:43:09.432414 2993 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:43:09.435501 kubelet[2993]: W0909 23:43:09.435442 2993 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-3e4141976f&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 9 23:43:09.435559 kubelet[2993]: E0909 23:43:09.435504 2993 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-3e4141976f&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:43:09.435880 kubelet[2993]: W0909 23:43:09.435848 2993 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection 
refused Sep 9 23:43:09.435958 kubelet[2993]: E0909 23:43:09.435890 2993 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:43:09.436816 kubelet[2993]: I0909 23:43:09.435981 2993 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:43:09.436816 kubelet[2993]: I0909 23:43:09.436315 2993 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:43:09.436816 kubelet[2993]: W0909 23:43:09.436360 2993 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 23:43:09.437616 kubelet[2993]: I0909 23:43:09.437591 2993 server.go:1274] "Started kubelet" Sep 9 23:43:09.440132 kubelet[2993]: I0909 23:43:09.440104 2993 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:43:09.440460 kubelet[2993]: I0909 23:43:09.440402 2993 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:43:09.440743 kubelet[2993]: I0909 23:43:09.440715 2993 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:43:09.441071 kubelet[2993]: I0909 23:43:09.441055 2993 server.go:449] "Adding debug handlers to kubelet server" Sep 9 23:43:09.441901 kubelet[2993]: E0909 23:43:09.440871 2993 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.0.0-n-3e4141976f.1863c1d27fe40d98 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.0.0-n-3e4141976f,UID:ci-4426.0.0-n-3e4141976f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.0.0-n-3e4141976f,},FirstTimestamp:2025-09-09 23:43:09.437570456 +0000 UTC m=+0.293695072,LastTimestamp:2025-09-09 23:43:09.437570456 +0000 UTC m=+0.293695072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.0.0-n-3e4141976f,}" Sep 9 23:43:09.444615 kubelet[2993]: I0909 23:43:09.443784 2993 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:43:09.444615 kubelet[2993]: I0909 23:43:09.443831 2993 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:43:09.444615 kubelet[2993]: I0909 23:43:09.444134 2993 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 23:43:09.444966 kubelet[2993]: I0909 23:43:09.444942 2993 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 23:43:09.445016 kubelet[2993]: I0909 23:43:09.444997 2993 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:43:09.446137 kubelet[2993]: W0909 23:43:09.445783 2993 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 9 23:43:09.446137 kubelet[2993]: E0909 23:43:09.445842 2993 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" 
logger="UnhandledError" Sep 9 23:43:09.446137 kubelet[2993]: E0909 23:43:09.445876 2993 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:09.446268 kubelet[2993]: E0909 23:43:09.446147 2993 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-3e4141976f?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Sep 9 23:43:09.446309 kubelet[2993]: I0909 23:43:09.446291 2993 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:43:09.446377 kubelet[2993]: I0909 23:43:09.446362 2993 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:43:09.447325 kubelet[2993]: E0909 23:43:09.447306 2993 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:43:09.447444 kubelet[2993]: I0909 23:43:09.447404 2993 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:43:09.475271 kubelet[2993]: I0909 23:43:09.475247 2993 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 23:43:09.475506 kubelet[2993]: I0909 23:43:09.475496 2993 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 23:43:09.475655 kubelet[2993]: I0909 23:43:09.475617 2993 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:09.546759 kubelet[2993]: E0909 23:43:09.546718 2993 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:09.647119 kubelet[2993]: E0909 23:43:09.647078 2993 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:09.647499 kubelet[2993]: E0909 23:43:09.647351 2993 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-3e4141976f?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Sep 9 23:43:09.695823 kubelet[2993]: I0909 23:43:09.695755 2993 policy_none.go:49] "None policy: Start" Sep 9 23:43:09.697109 kubelet[2993]: I0909 23:43:09.696953 2993 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 23:43:09.697109 kubelet[2993]: I0909 23:43:09.697035 2993 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:43:09.706897 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:43:09.721336 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 9 23:43:09.724623 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 23:43:09.734652 kubelet[2993]: I0909 23:43:09.734624 2993 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:43:09.735590 kubelet[2993]: I0909 23:43:09.734965 2993 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:43:09.735590 kubelet[2993]: I0909 23:43:09.734983 2993 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:43:09.735590 kubelet[2993]: I0909 23:43:09.735358 2993 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:43:09.739533 kubelet[2993]: I0909 23:43:09.739507 2993 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:43:09.741326 kubelet[2993]: E0909 23:43:09.741223 2993 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:09.741398 kubelet[2993]: I0909 23:43:09.741349 2993 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 23:43:09.741398 kubelet[2993]: I0909 23:43:09.741365 2993 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 23:43:09.741398 kubelet[2993]: I0909 23:43:09.741380 2993 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 23:43:09.741463 kubelet[2993]: E0909 23:43:09.741410 2993 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 9 23:43:09.743370 kubelet[2993]: W0909 23:43:09.743346 2993 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 9 23:43:09.743498 kubelet[2993]: E0909 23:43:09.743378 2993 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:43:09.836868 kubelet[2993]: I0909 23:43:09.836834 2993 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.837266 kubelet[2993]: E0909 23:43:09.837240 2993 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847646 kubelet[2993]: I0909 23:43:09.847046 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4699429a6c144941c0f94f28c22636d7-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" (UID: \"4699429a6c144941c0f94f28c22636d7\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847646 
kubelet[2993]: I0909 23:43:09.847088 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4699429a6c144941c0f94f28c22636d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" (UID: \"4699429a6c144941c0f94f28c22636d7\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847646 kubelet[2993]: I0909 23:43:09.847104 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847646 kubelet[2993]: I0909 23:43:09.847117 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847646 kubelet[2993]: I0909 23:43:09.847128 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847867 kubelet[2993]: I0909 23:43:09.847137 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-kubeconfig\") pod 
\"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847867 kubelet[2993]: I0909 23:43:09.847147 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.847867 kubelet[2993]: I0909 23:43:09.847174 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4699429a6c144941c0f94f28c22636d7-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" (UID: \"4699429a6c144941c0f94f28c22636d7\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:09.853327 systemd[1]: Created slice kubepods-burstable-pod4699429a6c144941c0f94f28c22636d7.slice - libcontainer container kubepods-burstable-pod4699429a6c144941c0f94f28c22636d7.slice. Sep 9 23:43:09.874283 systemd[1]: Created slice kubepods-burstable-pod7151436c23dcb766beecebfeb5b51de9.slice - libcontainer container kubepods-burstable-pod7151436c23dcb766beecebfeb5b51de9.slice. Sep 9 23:43:09.898934 systemd[1]: Created slice kubepods-burstable-pode8de74df8d011c817b5edde1b1a7eabe.slice - libcontainer container kubepods-burstable-pode8de74df8d011c817b5edde1b1a7eabe.slice. 
Sep 9 23:43:09.947450 kubelet[2993]: I0909 23:43:09.947409 2993 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8de74df8d011c817b5edde1b1a7eabe-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-n-3e4141976f\" (UID: \"e8de74df8d011c817b5edde1b1a7eabe\") " pod="kube-system/kube-scheduler-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:10.039341 kubelet[2993]: I0909 23:43:10.039312 2993 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:10.039679 kubelet[2993]: E0909 23:43:10.039654 2993 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:10.048176 kubelet[2993]: E0909 23:43:10.048138 2993 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-3e4141976f?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Sep 9 23:43:10.173642 containerd[1869]: time="2025-09-09T23:43:10.173509143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-n-3e4141976f,Uid:4699429a6c144941c0f94f28c22636d7,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:10.178073 containerd[1869]: time="2025-09-09T23:43:10.178039889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-n-3e4141976f,Uid:7151436c23dcb766beecebfeb5b51de9,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:10.202157 containerd[1869]: time="2025-09-09T23:43:10.202102974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-n-3e4141976f,Uid:e8de74df8d011c817b5edde1b1a7eabe,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:10.277573 kubelet[2993]: W0909 23:43:10.277444 
2993 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-3e4141976f&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 9 23:43:10.277938 kubelet[2993]: E0909 23:43:10.277602 2993 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-3e4141976f&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:43:10.283661 containerd[1869]: time="2025-09-09T23:43:10.283612567Z" level=info msg="connecting to shim 4e04604cd7c9dba568fc37ea2eaba397383fc88b4a1ebc36d8909be687380cef" address="unix:///run/containerd/s/f6f9631735c418fa975e4296fd864c251c568751e644b767eaf11619651878d6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:10.283978 containerd[1869]: time="2025-09-09T23:43:10.283924200Z" level=info msg="connecting to shim f85e71826f36b3570ad33bf78d6ac1c1763d51cb88b94bbc938ffda1ff750e03" address="unix:///run/containerd/s/f6aa7f5252968a74c3b6bec15687315d0205903fc0bf9062c39a7a5c530eb1ff" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:10.309526 containerd[1869]: time="2025-09-09T23:43:10.309477447Z" level=info msg="connecting to shim 457aad9570af4eab4de5fb3c0f4fa27ca107f3825f2d9563858e2528d31a1abf" address="unix:///run/containerd/s/2ce4bc9d8cc59b29c9fb574788d090e18ffd415a29520d2c40480da2a96ffd57" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:10.319003 systemd[1]: Started cri-containerd-f85e71826f36b3570ad33bf78d6ac1c1763d51cb88b94bbc938ffda1ff750e03.scope - libcontainer container f85e71826f36b3570ad33bf78d6ac1c1763d51cb88b94bbc938ffda1ff750e03. 
Sep 9 23:43:10.327310 kubelet[2993]: W0909 23:43:10.326848 2993 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 9 23:43:10.327169 systemd[1]: Started cri-containerd-4e04604cd7c9dba568fc37ea2eaba397383fc88b4a1ebc36d8909be687380cef.scope - libcontainer container 4e04604cd7c9dba568fc37ea2eaba397383fc88b4a1ebc36d8909be687380cef. Sep 9 23:43:10.327645 kubelet[2993]: E0909 23:43:10.327594 2993 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:43:10.357974 systemd[1]: Started cri-containerd-457aad9570af4eab4de5fb3c0f4fa27ca107f3825f2d9563858e2528d31a1abf.scope - libcontainer container 457aad9570af4eab4de5fb3c0f4fa27ca107f3825f2d9563858e2528d31a1abf. 
Sep 9 23:43:10.390508 containerd[1869]: time="2025-09-09T23:43:10.390441271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-n-3e4141976f,Uid:4699429a6c144941c0f94f28c22636d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e04604cd7c9dba568fc37ea2eaba397383fc88b4a1ebc36d8909be687380cef\"" Sep 9 23:43:10.395362 containerd[1869]: time="2025-09-09T23:43:10.395307890Z" level=info msg="CreateContainer within sandbox \"4e04604cd7c9dba568fc37ea2eaba397383fc88b4a1ebc36d8909be687380cef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:43:10.400703 containerd[1869]: time="2025-09-09T23:43:10.400656058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-n-3e4141976f,Uid:7151436c23dcb766beecebfeb5b51de9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f85e71826f36b3570ad33bf78d6ac1c1763d51cb88b94bbc938ffda1ff750e03\"" Sep 9 23:43:10.407811 containerd[1869]: time="2025-09-09T23:43:10.407566935Z" level=info msg="CreateContainer within sandbox \"f85e71826f36b3570ad33bf78d6ac1c1763d51cb88b94bbc938ffda1ff750e03\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:43:10.435680 containerd[1869]: time="2025-09-09T23:43:10.435228410Z" level=info msg="Container 57b68c48e3fb32941d52967c6517fe0827ccc61b40950298718350c726dcbe06: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:10.437752 containerd[1869]: time="2025-09-09T23:43:10.437708528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-n-3e4141976f,Uid:e8de74df8d011c817b5edde1b1a7eabe,Namespace:kube-system,Attempt:0,} returns sandbox id \"457aad9570af4eab4de5fb3c0f4fa27ca107f3825f2d9563858e2528d31a1abf\"" Sep 9 23:43:10.442414 kubelet[2993]: I0909 23:43:10.442223 2993 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:10.442996 kubelet[2993]: E0909 23:43:10.442968 2993 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:10.445288 containerd[1869]: time="2025-09-09T23:43:10.445205294Z" level=info msg="CreateContainer within sandbox \"457aad9570af4eab4de5fb3c0f4fa27ca107f3825f2d9563858e2528d31a1abf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:43:10.456992 containerd[1869]: time="2025-09-09T23:43:10.456889930Z" level=info msg="CreateContainer within sandbox \"4e04604cd7c9dba568fc37ea2eaba397383fc88b4a1ebc36d8909be687380cef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57b68c48e3fb32941d52967c6517fe0827ccc61b40950298718350c726dcbe06\"" Sep 9 23:43:10.457826 containerd[1869]: time="2025-09-09T23:43:10.457780084Z" level=info msg="StartContainer for \"57b68c48e3fb32941d52967c6517fe0827ccc61b40950298718350c726dcbe06\"" Sep 9 23:43:10.459101 containerd[1869]: time="2025-09-09T23:43:10.459045504Z" level=info msg="connecting to shim 57b68c48e3fb32941d52967c6517fe0827ccc61b40950298718350c726dcbe06" address="unix:///run/containerd/s/f6f9631735c418fa975e4296fd864c251c568751e644b767eaf11619651878d6" protocol=ttrpc version=3 Sep 9 23:43:10.469840 containerd[1869]: time="2025-09-09T23:43:10.469733416Z" level=info msg="Container a40a2c985689588bd06b64c786223fcd5d44170ad31f89224b8a47b95e9f7c66: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:10.474970 systemd[1]: Started cri-containerd-57b68c48e3fb32941d52967c6517fe0827ccc61b40950298718350c726dcbe06.scope - libcontainer container 57b68c48e3fb32941d52967c6517fe0827ccc61b40950298718350c726dcbe06. 
Sep 9 23:43:10.503518 containerd[1869]: time="2025-09-09T23:43:10.503463896Z" level=info msg="Container 0645dfe15c0b32b0c030263fe484465ff8a44ed3a168323a17e5f7a81ebd3fdd: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:10.507532 containerd[1869]: time="2025-09-09T23:43:10.507328806Z" level=info msg="CreateContainer within sandbox \"f85e71826f36b3570ad33bf78d6ac1c1763d51cb88b94bbc938ffda1ff750e03\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a40a2c985689588bd06b64c786223fcd5d44170ad31f89224b8a47b95e9f7c66\"" Sep 9 23:43:10.509474 containerd[1869]: time="2025-09-09T23:43:10.509104584Z" level=info msg="StartContainer for \"a40a2c985689588bd06b64c786223fcd5d44170ad31f89224b8a47b95e9f7c66\"" Sep 9 23:43:10.510336 containerd[1869]: time="2025-09-09T23:43:10.510297354Z" level=info msg="connecting to shim a40a2c985689588bd06b64c786223fcd5d44170ad31f89224b8a47b95e9f7c66" address="unix:///run/containerd/s/f6aa7f5252968a74c3b6bec15687315d0205903fc0bf9062c39a7a5c530eb1ff" protocol=ttrpc version=3 Sep 9 23:43:10.524629 containerd[1869]: time="2025-09-09T23:43:10.524589049Z" level=info msg="StartContainer for \"57b68c48e3fb32941d52967c6517fe0827ccc61b40950298718350c726dcbe06\" returns successfully" Sep 9 23:43:10.528502 containerd[1869]: time="2025-09-09T23:43:10.528456383Z" level=info msg="CreateContainer within sandbox \"457aad9570af4eab4de5fb3c0f4fa27ca107f3825f2d9563858e2528d31a1abf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0645dfe15c0b32b0c030263fe484465ff8a44ed3a168323a17e5f7a81ebd3fdd\"" Sep 9 23:43:10.529984 containerd[1869]: time="2025-09-09T23:43:10.529845182Z" level=info msg="StartContainer for \"0645dfe15c0b32b0c030263fe484465ff8a44ed3a168323a17e5f7a81ebd3fdd\"" Sep 9 23:43:10.531782 containerd[1869]: time="2025-09-09T23:43:10.531742533Z" level=info msg="connecting to shim 0645dfe15c0b32b0c030263fe484465ff8a44ed3a168323a17e5f7a81ebd3fdd" 
address="unix:///run/containerd/s/2ce4bc9d8cc59b29c9fb574788d090e18ffd415a29520d2c40480da2a96ffd57" protocol=ttrpc version=3 Sep 9 23:43:10.536011 systemd[1]: Started cri-containerd-a40a2c985689588bd06b64c786223fcd5d44170ad31f89224b8a47b95e9f7c66.scope - libcontainer container a40a2c985689588bd06b64c786223fcd5d44170ad31f89224b8a47b95e9f7c66. Sep 9 23:43:10.561991 systemd[1]: Started cri-containerd-0645dfe15c0b32b0c030263fe484465ff8a44ed3a168323a17e5f7a81ebd3fdd.scope - libcontainer container 0645dfe15c0b32b0c030263fe484465ff8a44ed3a168323a17e5f7a81ebd3fdd. Sep 9 23:43:10.608443 containerd[1869]: time="2025-09-09T23:43:10.608393682Z" level=info msg="StartContainer for \"a40a2c985689588bd06b64c786223fcd5d44170ad31f89224b8a47b95e9f7c66\" returns successfully" Sep 9 23:43:10.626722 containerd[1869]: time="2025-09-09T23:43:10.626678114Z" level=info msg="StartContainer for \"0645dfe15c0b32b0c030263fe484465ff8a44ed3a168323a17e5f7a81ebd3fdd\" returns successfully" Sep 9 23:43:11.245602 kubelet[2993]: I0909 23:43:11.245570 2993 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:11.903542 kubelet[2993]: E0909 23:43:11.903480 2993 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.0.0-n-3e4141976f\" not found" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:12.300608 kubelet[2993]: I0909 23:43:12.300472 2993 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:12.300608 kubelet[2993]: E0909 23:43:12.300523 2993 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4426.0.0-n-3e4141976f\": node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:12.431223 kubelet[2993]: E0909 23:43:12.431155 2993 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:12.532076 kubelet[2993]: E0909 23:43:12.532024 2993 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:12.633122 kubelet[2993]: E0909 23:43:12.632991 2993 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:12.772530 kubelet[2993]: E0909 23:43:12.771668 2993 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:13.436801 kubelet[2993]: I0909 23:43:13.436740 2993 apiserver.go:52] "Watching apiserver" Sep 9 23:43:13.445208 kubelet[2993]: I0909 23:43:13.445167 2993 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 23:43:14.234766 systemd[1]: Reload requested from client PID 3262 ('systemctl') (unit session-9.scope)... Sep 9 23:43:14.234782 systemd[1]: Reloading... Sep 9 23:43:14.318836 zram_generator::config[3309]: No configuration found. Sep 9 23:43:14.488951 systemd[1]: Reloading finished in 253 ms. Sep 9 23:43:14.504925 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:14.523080 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:43:14.523311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:14.523385 systemd[1]: kubelet.service: Consumed 553ms CPU time, 124.9M memory peak. Sep 9 23:43:14.525487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:14.633525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:43:14.646145 (kubelet)[3373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:43:14.764541 kubelet[3373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:43:14.764987 kubelet[3373]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 23:43:14.765058 kubelet[3373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:43:14.765204 kubelet[3373]: I0909 23:43:14.765156 3373 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:43:14.770661 kubelet[3373]: I0909 23:43:14.770629 3373 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 23:43:14.770853 kubelet[3373]: I0909 23:43:14.770840 3373 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:43:14.771082 kubelet[3373]: I0909 23:43:14.771065 3373 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 23:43:14.772347 kubelet[3373]: I0909 23:43:14.772322 3373 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 23:43:14.774058 kubelet[3373]: I0909 23:43:14.774029 3373 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:43:14.780562 kubelet[3373]: I0909 23:43:14.780521 3373 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:43:14.784154 kubelet[3373]: I0909 23:43:14.784112 3373 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:43:14.784420 kubelet[3373]: I0909 23:43:14.784404 3373 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 23:43:14.784740 kubelet[3373]: I0909 23:43:14.784701 3373 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:43:14.784991 kubelet[3373]: I0909 23:43:14.784837 3373 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-n-3e4141976f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signa
l":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:43:14.785122 kubelet[3373]: I0909 23:43:14.785109 3373 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:43:14.785175 kubelet[3373]: I0909 23:43:14.785167 3373 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 23:43:14.785258 kubelet[3373]: I0909 23:43:14.785249 3373 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:14.785435 kubelet[3373]: I0909 23:43:14.785420 3373 kubelet.go:408] "Attempting to sync node with API server" Sep 9 23:43:14.785524 kubelet[3373]: I0909 23:43:14.785515 3373 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:43:14.785587 kubelet[3373]: I0909 23:43:14.785580 3373 kubelet.go:314] "Adding apiserver pod source" Sep 9 23:43:14.785645 kubelet[3373]: I0909 23:43:14.785637 3373 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:43:14.786739 kubelet[3373]: I0909 23:43:14.786721 3373 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:43:14.787290 kubelet[3373]: I0909 23:43:14.787271 3373 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:43:14.787836 kubelet[3373]: I0909 23:43:14.787816 3373 server.go:1274] "Started kubelet" Sep 9 23:43:14.789774 kubelet[3373]: I0909 23:43:14.789739 3373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 
23:43:14.799086 kubelet[3373]: I0909 23:43:14.799037 3373 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:43:14.799954 kubelet[3373]: I0909 23:43:14.799923 3373 server.go:449] "Adding debug handlers to kubelet server" Sep 9 23:43:14.800882 kubelet[3373]: I0909 23:43:14.800737 3373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:43:14.801264 kubelet[3373]: I0909 23:43:14.801240 3373 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:43:14.801835 kubelet[3373]: I0909 23:43:14.801763 3373 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:43:14.802835 kubelet[3373]: I0909 23:43:14.802775 3373 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 23:43:14.803156 kubelet[3373]: E0909 23:43:14.803124 3373 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-3e4141976f\" not found" Sep 9 23:43:14.804568 kubelet[3373]: I0909 23:43:14.804522 3373 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 23:43:14.805772 kubelet[3373]: I0909 23:43:14.805749 3373 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:43:14.807437 kubelet[3373]: I0909 23:43:14.807405 3373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:43:14.808901 kubelet[3373]: I0909 23:43:14.808874 3373 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 23:43:14.809019 kubelet[3373]: I0909 23:43:14.809007 3373 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 23:43:14.809076 kubelet[3373]: I0909 23:43:14.809067 3373 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 23:43:14.809161 kubelet[3373]: E0909 23:43:14.809145 3373 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:43:14.812313 kubelet[3373]: I0909 23:43:14.812262 3373 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:43:14.812399 kubelet[3373]: I0909 23:43:14.812376 3373 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:43:14.823655 kubelet[3373]: I0909 23:43:14.823595 3373 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:43:14.824455 kubelet[3373]: E0909 23:43:14.824427 3373 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:43:14.860284 kubelet[3373]: I0909 23:43:14.860254 3373 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 23:43:14.860453 kubelet[3373]: I0909 23:43:14.860439 3373 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 23:43:14.860501 kubelet[3373]: I0909 23:43:14.860494 3373 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:14.860711 kubelet[3373]: I0909 23:43:14.860693 3373 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:43:14.860813 kubelet[3373]: I0909 23:43:14.860768 3373 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:43:14.860878 kubelet[3373]: I0909 23:43:14.860869 3373 policy_none.go:49] "None policy: Start" Sep 9 23:43:14.861765 kubelet[3373]: I0909 23:43:14.861727 3373 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 23:43:14.861765 kubelet[3373]: I0909 23:43:14.861763 3373 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:43:14.862030 kubelet[3373]: I0909 23:43:14.862010 3373 state_mem.go:75] "Updated machine memory state" Sep 9 23:43:14.866647 kubelet[3373]: I0909 23:43:14.866615 3373 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:43:14.866813 kubelet[3373]: I0909 23:43:14.866780 3373 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:43:14.867287 kubelet[3373]: I0909 23:43:14.867243 3373 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:43:14.867829 kubelet[3373]: I0909 23:43:14.867464 3373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:43:14.917465 kubelet[3373]: W0909 23:43:14.917429 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 
23:43:14.923616 kubelet[3373]: W0909 23:43:14.923582 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 23:43:14.924112 kubelet[3373]: W0909 23:43:14.924065 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 23:43:14.970620 kubelet[3373]: I0909 23:43:14.970448 3373 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:14.982216 kubelet[3373]: I0909 23:43:14.982142 3373 kubelet_node_status.go:111] "Node was previously registered" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:14.982712 kubelet[3373]: I0909 23:43:14.982439 3373 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.006470 kubelet[3373]: I0909 23:43:15.006365 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.006683 kubelet[3373]: I0909 23:43:15.006666 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.006767 kubelet[3373]: I0909 23:43:15.006753 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e8de74df8d011c817b5edde1b1a7eabe-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-n-3e4141976f\" (UID: \"e8de74df8d011c817b5edde1b1a7eabe\") " pod="kube-system/kube-scheduler-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.006898 kubelet[3373]: I0909 23:43:15.006885 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4699429a6c144941c0f94f28c22636d7-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" (UID: \"4699429a6c144941c0f94f28c22636d7\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.006976 kubelet[3373]: I0909 23:43:15.006966 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.007045 kubelet[3373]: I0909 23:43:15.007037 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.007102 kubelet[3373]: I0909 23:43:15.007093 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4699429a6c144941c0f94f28c22636d7-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" (UID: \"4699429a6c144941c0f94f28c22636d7\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.007164 kubelet[3373]: I0909 23:43:15.007154 3373 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4699429a6c144941c0f94f28c22636d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" (UID: \"4699429a6c144941c0f94f28c22636d7\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.007249 kubelet[3373]: I0909 23:43:15.007237 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7151436c23dcb766beecebfeb5b51de9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-n-3e4141976f\" (UID: \"7151436c23dcb766beecebfeb5b51de9\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.267492 sudo[3403]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 23:43:15.267727 sudo[3403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 23:43:15.518244 sudo[3403]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:15.795613 kubelet[3373]: I0909 23:43:15.795403 3373 apiserver.go:52] "Watching apiserver" Sep 9 23:43:15.804906 kubelet[3373]: I0909 23:43:15.804863 3373 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 23:43:15.862637 kubelet[3373]: W0909 23:43:15.862593 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 23:43:15.862787 kubelet[3373]: E0909 23:43:15.862662 3373 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4426.0.0-n-3e4141976f\" already exists" pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" Sep 9 23:43:15.916633 kubelet[3373]: I0909 23:43:15.916491 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4426.0.0-n-3e4141976f" podStartSLOduration=1.916473173 podStartE2EDuration="1.916473173s" podCreationTimestamp="2025-09-09 23:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:15.89648771 +0000 UTC m=+1.247080057" watchObservedRunningTime="2025-09-09 23:43:15.916473173 +0000 UTC m=+1.267065544" Sep 9 23:43:15.931763 kubelet[3373]: I0909 23:43:15.931624 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-3e4141976f" podStartSLOduration=1.9316066539999999 podStartE2EDuration="1.931606654s" podCreationTimestamp="2025-09-09 23:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:15.916439068 +0000 UTC m=+1.267031415" watchObservedRunningTime="2025-09-09 23:43:15.931606654 +0000 UTC m=+1.282199001" Sep 9 23:43:15.945823 kubelet[3373]: I0909 23:43:15.945737 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.0.0-n-3e4141976f" podStartSLOduration=1.9456823170000002 podStartE2EDuration="1.945682317s" podCreationTimestamp="2025-09-09 23:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:15.932329063 +0000 UTC m=+1.282921418" watchObservedRunningTime="2025-09-09 23:43:15.945682317 +0000 UTC m=+1.296274672" Sep 9 23:43:16.936051 sudo[2371]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:17.022833 sshd[2370]: Connection closed by 10.200.16.10 port 57598 Sep 9 23:43:17.023035 sshd-session[2367]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:17.026764 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:57598.service: Deactivated successfully. 
Sep 9 23:43:17.029373 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 23:43:17.029644 systemd[1]: session-9.scope: Consumed 2.831s CPU time, 258M memory peak. Sep 9 23:43:17.032439 systemd-logind[1852]: Session 9 logged out. Waiting for processes to exit. Sep 9 23:43:17.033329 systemd-logind[1852]: Removed session 9. Sep 9 23:43:19.943742 kubelet[3373]: I0909 23:43:19.943704 3373 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:43:19.944667 kubelet[3373]: I0909 23:43:19.944612 3373 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:43:19.944726 containerd[1869]: time="2025-09-09T23:43:19.944450063Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 23:43:20.820789 kubelet[3373]: W0909 23:43:20.820553 3373 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4426.0.0-n-3e4141976f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4426.0.0-n-3e4141976f' and this object Sep 9 23:43:20.820789 kubelet[3373]: E0909 23:43:20.820603 3373 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4426.0.0-n-3e4141976f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4426.0.0-n-3e4141976f' and this object" logger="UnhandledError" Sep 9 23:43:20.821349 kubelet[3373]: W0909 23:43:20.821296 3373 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4426.0.0-n-3e4141976f" cannot list resource "configmaps" in API group "" 
in the namespace "kube-system": no relationship found between node 'ci-4426.0.0-n-3e4141976f' and this object Sep 9 23:43:20.821349 kubelet[3373]: E0909 23:43:20.821327 3373 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4426.0.0-n-3e4141976f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4426.0.0-n-3e4141976f' and this object" logger="UnhandledError" Sep 9 23:43:20.823571 systemd[1]: Created slice kubepods-besteffort-pod1256099e_5f75_4986_9848_f4a32bf2edda.slice - libcontainer container kubepods-besteffort-pod1256099e_5f75_4986_9848_f4a32bf2edda.slice. Sep 9 23:43:20.838585 systemd[1]: Created slice kubepods-burstable-pod82c5441b_f90f_4269_a357_5ae7931672ba.slice - libcontainer container kubepods-burstable-pod82c5441b_f90f_4269_a357_5ae7931672ba.slice. 
Sep 9 23:43:20.840393 kubelet[3373]: I0909 23:43:20.840360 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82c5441b-f90f-4269-a357-5ae7931672ba-clustermesh-secrets\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.840859 kubelet[3373]: I0909 23:43:20.840561 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-net\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.840859 kubelet[3373]: I0909 23:43:20.840586 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-hostproc\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.840859 kubelet[3373]: I0909 23:43:20.840599 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-config-path\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841371 kubelet[3373]: I0909 23:43:20.841087 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1256099e-5f75-4986-9848-f4a32bf2edda-kube-proxy\") pod \"kube-proxy-tszzq\" (UID: \"1256099e-5f75-4986-9848-f4a32bf2edda\") " pod="kube-system/kube-proxy-tszzq" Sep 9 23:43:20.841371 kubelet[3373]: I0909 23:43:20.841115 3373 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-run\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841750 kubelet[3373]: I0909 23:43:20.841128 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh92l\" (UniqueName: \"kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-kube-api-access-rh92l\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841750 kubelet[3373]: I0909 23:43:20.841553 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1256099e-5f75-4986-9848-f4a32bf2edda-xtables-lock\") pod \"kube-proxy-tszzq\" (UID: \"1256099e-5f75-4986-9848-f4a32bf2edda\") " pod="kube-system/kube-proxy-tszzq" Sep 9 23:43:20.841750 kubelet[3373]: I0909 23:43:20.841566 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-bpf-maps\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841750 kubelet[3373]: I0909 23:43:20.841577 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-lib-modules\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841750 kubelet[3373]: I0909 23:43:20.841597 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-kernel\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841902 kubelet[3373]: I0909 23:43:20.841607 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdqm\" (UniqueName: \"kubernetes.io/projected/1256099e-5f75-4986-9848-f4a32bf2edda-kube-api-access-qcdqm\") pod \"kube-proxy-tszzq\" (UID: \"1256099e-5f75-4986-9848-f4a32bf2edda\") " pod="kube-system/kube-proxy-tszzq" Sep 9 23:43:20.841902 kubelet[3373]: I0909 23:43:20.841617 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-etc-cni-netd\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841902 kubelet[3373]: I0909 23:43:20.841628 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-hubble-tls\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.841902 kubelet[3373]: I0909 23:43:20.841641 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-xtables-lock\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.842906 kubelet[3373]: I0909 23:43:20.841659 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-cgroup\") pod \"cilium-qv8fg\" (UID: 
\"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:20.842906 kubelet[3373]: I0909 23:43:20.842130 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1256099e-5f75-4986-9848-f4a32bf2edda-lib-modules\") pod \"kube-proxy-tszzq\" (UID: \"1256099e-5f75-4986-9848-f4a32bf2edda\") " pod="kube-system/kube-proxy-tszzq" Sep 9 23:43:20.842906 kubelet[3373]: I0909 23:43:20.842236 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cni-path\") pod \"cilium-qv8fg\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " pod="kube-system/cilium-qv8fg" Sep 9 23:43:21.235115 systemd[1]: Created slice kubepods-besteffort-pod4367c7c1_a27c_48d4_84f1_e5c3ca0735ae.slice - libcontainer container kubepods-besteffort-pod4367c7c1_a27c_48d4_84f1_e5c3ca0735ae.slice. 
Sep 9 23:43:21.245130 kubelet[3373]: I0909 23:43:21.245079 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg78p\" (UniqueName: \"kubernetes.io/projected/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-kube-api-access-vg78p\") pod \"cilium-operator-5d85765b45-4t9rd\" (UID: \"4367c7c1-a27c-48d4-84f1-e5c3ca0735ae\") " pod="kube-system/cilium-operator-5d85765b45-4t9rd" Sep 9 23:43:21.245130 kubelet[3373]: I0909 23:43:21.245137 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-cilium-config-path\") pod \"cilium-operator-5d85765b45-4t9rd\" (UID: \"4367c7c1-a27c-48d4-84f1-e5c3ca0735ae\") " pod="kube-system/cilium-operator-5d85765b45-4t9rd" Sep 9 23:43:21.950470 kubelet[3373]: E0909 23:43:21.950394 3373 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Sep 9 23:43:21.951879 kubelet[3373]: E0909 23:43:21.950705 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1256099e-5f75-4986-9848-f4a32bf2edda-kube-proxy podName:1256099e-5f75-4986-9848-f4a32bf2edda nodeName:}" failed. No retries permitted until 2025-09-09 23:43:22.45067679 +0000 UTC m=+7.801269145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1256099e-5f75-4986-9848-f4a32bf2edda-kube-proxy") pod "kube-proxy-tszzq" (UID: "1256099e-5f75-4986-9848-f4a32bf2edda") : failed to sync configmap cache: timed out waiting for the condition Sep 9 23:43:22.045745 containerd[1869]: time="2025-09-09T23:43:22.045691480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qv8fg,Uid:82c5441b-f90f-4269-a357-5ae7931672ba,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:22.084105 containerd[1869]: time="2025-09-09T23:43:22.084054659Z" level=info msg="connecting to shim da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61" address="unix:///run/containerd/s/f7c32d19c877e6ac99feec08f95128e86bcf3b66033df2da743ddd9aa5489777" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:22.106057 systemd[1]: Started cri-containerd-da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61.scope - libcontainer container da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61. 
Sep 9 23:43:22.130515 containerd[1869]: time="2025-09-09T23:43:22.130473975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qv8fg,Uid:82c5441b-f90f-4269-a357-5ae7931672ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\"" Sep 9 23:43:22.132740 containerd[1869]: time="2025-09-09T23:43:22.132672625Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 23:43:22.141449 containerd[1869]: time="2025-09-09T23:43:22.141408390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4t9rd,Uid:4367c7c1-a27c-48d4-84f1-e5c3ca0735ae,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:22.185441 containerd[1869]: time="2025-09-09T23:43:22.185380665Z" level=info msg="connecting to shim a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93" address="unix:///run/containerd/s/7b620a77cc5b1b07b64845055185c5a77a79c16b0a3099fa2f7485a1382a7fc9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:22.209320 systemd[1]: Started cri-containerd-a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93.scope - libcontainer container a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93. 
Sep 9 23:43:22.241634 containerd[1869]: time="2025-09-09T23:43:22.241585313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4t9rd,Uid:4367c7c1-a27c-48d4-84f1-e5c3ca0735ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\"" Sep 9 23:43:22.635128 containerd[1869]: time="2025-09-09T23:43:22.634837073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tszzq,Uid:1256099e-5f75-4986-9848-f4a32bf2edda,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:22.688259 containerd[1869]: time="2025-09-09T23:43:22.688183340Z" level=info msg="connecting to shim 08c826e8f46034b0365d6773fa5086c05eb312d7d2202ab7d4bcc048324d0f4a" address="unix:///run/containerd/s/eb21f1c6cf84a5eb7f2dc14a3f8946a276ff34ade6d80c4ba91eeafad4c831fa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:22.713010 systemd[1]: Started cri-containerd-08c826e8f46034b0365d6773fa5086c05eb312d7d2202ab7d4bcc048324d0f4a.scope - libcontainer container 08c826e8f46034b0365d6773fa5086c05eb312d7d2202ab7d4bcc048324d0f4a. 
Sep 9 23:43:22.739698 containerd[1869]: time="2025-09-09T23:43:22.739637199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tszzq,Uid:1256099e-5f75-4986-9848-f4a32bf2edda,Namespace:kube-system,Attempt:0,} returns sandbox id \"08c826e8f46034b0365d6773fa5086c05eb312d7d2202ab7d4bcc048324d0f4a\"" Sep 9 23:43:22.743247 containerd[1869]: time="2025-09-09T23:43:22.743212033Z" level=info msg="CreateContainer within sandbox \"08c826e8f46034b0365d6773fa5086c05eb312d7d2202ab7d4bcc048324d0f4a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:43:22.766717 containerd[1869]: time="2025-09-09T23:43:22.766667103Z" level=info msg="Container 78e1da46d411b1ea2cb8e1242b012fb65df437a22af6112c67a2d26e1cbedcf8: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:22.788551 containerd[1869]: time="2025-09-09T23:43:22.788490907Z" level=info msg="CreateContainer within sandbox \"08c826e8f46034b0365d6773fa5086c05eb312d7d2202ab7d4bcc048324d0f4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78e1da46d411b1ea2cb8e1242b012fb65df437a22af6112c67a2d26e1cbedcf8\"" Sep 9 23:43:22.789630 containerd[1869]: time="2025-09-09T23:43:22.789605573Z" level=info msg="StartContainer for \"78e1da46d411b1ea2cb8e1242b012fb65df437a22af6112c67a2d26e1cbedcf8\"" Sep 9 23:43:22.791133 containerd[1869]: time="2025-09-09T23:43:22.791087657Z" level=info msg="connecting to shim 78e1da46d411b1ea2cb8e1242b012fb65df437a22af6112c67a2d26e1cbedcf8" address="unix:///run/containerd/s/eb21f1c6cf84a5eb7f2dc14a3f8946a276ff34ade6d80c4ba91eeafad4c831fa" protocol=ttrpc version=3 Sep 9 23:43:22.808986 systemd[1]: Started cri-containerd-78e1da46d411b1ea2cb8e1242b012fb65df437a22af6112c67a2d26e1cbedcf8.scope - libcontainer container 78e1da46d411b1ea2cb8e1242b012fb65df437a22af6112c67a2d26e1cbedcf8. 
Sep 9 23:43:22.848141 containerd[1869]: time="2025-09-09T23:43:22.847943621Z" level=info msg="StartContainer for \"78e1da46d411b1ea2cb8e1242b012fb65df437a22af6112c67a2d26e1cbedcf8\" returns successfully" Sep 9 23:43:22.892160 kubelet[3373]: I0909 23:43:22.891485 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tszzq" podStartSLOduration=2.891466163 podStartE2EDuration="2.891466163s" podCreationTimestamp="2025-09-09 23:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:22.875973075 +0000 UTC m=+8.226565422" watchObservedRunningTime="2025-09-09 23:43:22.891466163 +0000 UTC m=+8.242058510" Sep 9 23:43:28.020999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4080410717.mount: Deactivated successfully. Sep 9 23:43:29.559526 containerd[1869]: time="2025-09-09T23:43:29.559222131Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:29.562364 containerd[1869]: time="2025-09-09T23:43:29.562199332Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 9 23:43:29.565456 containerd[1869]: time="2025-09-09T23:43:29.565426492Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:29.566745 containerd[1869]: time="2025-09-09T23:43:29.566699699Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.433806659s" Sep 9 23:43:29.566745 containerd[1869]: time="2025-09-09T23:43:29.566742852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 23:43:29.568590 containerd[1869]: time="2025-09-09T23:43:29.568559362Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 23:43:29.569483 containerd[1869]: time="2025-09-09T23:43:29.569438748Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:43:29.592829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026340718.mount: Deactivated successfully. 
Sep 9 23:43:29.595679 containerd[1869]: time="2025-09-09T23:43:29.593825190Z" level=info msg="Container 4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:29.610449 containerd[1869]: time="2025-09-09T23:43:29.610405885Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\"" Sep 9 23:43:29.611186 containerd[1869]: time="2025-09-09T23:43:29.611162404Z" level=info msg="StartContainer for \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\"" Sep 9 23:43:29.612532 containerd[1869]: time="2025-09-09T23:43:29.612507964Z" level=info msg="connecting to shim 4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0" address="unix:///run/containerd/s/f7c32d19c877e6ac99feec08f95128e86bcf3b66033df2da743ddd9aa5489777" protocol=ttrpc version=3 Sep 9 23:43:29.632979 systemd[1]: Started cri-containerd-4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0.scope - libcontainer container 4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0. Sep 9 23:43:29.663923 containerd[1869]: time="2025-09-09T23:43:29.663888885Z" level=info msg="StartContainer for \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" returns successfully" Sep 9 23:43:29.666649 systemd[1]: cri-containerd-4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0.scope: Deactivated successfully. 
Sep 9 23:43:29.671854 containerd[1869]: time="2025-09-09T23:43:29.671780537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" id:\"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" pid:3786 exited_at:{seconds:1757461409 nanos:670322805}"
Sep 9 23:43:29.672064 containerd[1869]: time="2025-09-09T23:43:29.671817178Z" level=info msg="received exit event container_id:\"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" id:\"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" pid:3786 exited_at:{seconds:1757461409 nanos:670322805}"
Sep 9 23:43:30.590039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0-rootfs.mount: Deactivated successfully.
Sep 9 23:43:31.890992 containerd[1869]: time="2025-09-09T23:43:31.890779727Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:43:31.915362 containerd[1869]: time="2025-09-09T23:43:31.914968201Z" level=info msg="Container 991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:31.929385 containerd[1869]: time="2025-09-09T23:43:31.929350537Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\""
Sep 9 23:43:31.929976 containerd[1869]: time="2025-09-09T23:43:31.929950650Z" level=info msg="StartContainer for \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\""
Sep 9 23:43:31.930616 containerd[1869]: time="2025-09-09T23:43:31.930591732Z" level=info msg="connecting to shim 991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3" address="unix:///run/containerd/s/f7c32d19c877e6ac99feec08f95128e86bcf3b66033df2da743ddd9aa5489777" protocol=ttrpc version=3
Sep 9 23:43:31.948921 systemd[1]: Started cri-containerd-991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3.scope - libcontainer container 991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3.
Sep 9 23:43:31.976596 containerd[1869]: time="2025-09-09T23:43:31.976556035Z" level=info msg="StartContainer for \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" returns successfully"
Sep 9 23:43:31.991350 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:43:31.991664 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:43:31.992166 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:43:31.996034 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:43:31.997154 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 23:43:31.998672 systemd[1]: cri-containerd-991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3.scope: Deactivated successfully.
Sep 9 23:43:32.000959 containerd[1869]: time="2025-09-09T23:43:32.000924258Z" level=info msg="received exit event container_id:\"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" id:\"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" pid:3833 exited_at:{seconds:1757461412 nanos:500494}"
Sep 9 23:43:32.001182 containerd[1869]: time="2025-09-09T23:43:32.001111272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" id:\"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" pid:3833 exited_at:{seconds:1757461412 nanos:500494}"
Sep 9 23:43:32.014520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:43:32.899833 containerd[1869]: time="2025-09-09T23:43:32.898083521Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:43:32.915965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3-rootfs.mount: Deactivated successfully.
Sep 9 23:43:32.948427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775634215.mount: Deactivated successfully.
Sep 9 23:43:32.951295 containerd[1869]: time="2025-09-09T23:43:32.951184430Z" level=info msg="Container 66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:32.980642 containerd[1869]: time="2025-09-09T23:43:32.980556405Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\""
Sep 9 23:43:32.981872 containerd[1869]: time="2025-09-09T23:43:32.981849043Z" level=info msg="StartContainer for \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\""
Sep 9 23:43:32.985394 containerd[1869]: time="2025-09-09T23:43:32.985324327Z" level=info msg="connecting to shim 66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a" address="unix:///run/containerd/s/f7c32d19c877e6ac99feec08f95128e86bcf3b66033df2da743ddd9aa5489777" protocol=ttrpc version=3
Sep 9 23:43:33.012093 systemd[1]: Started cri-containerd-66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a.scope - libcontainer container 66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a.
Sep 9 23:43:33.049104 systemd[1]: cri-containerd-66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a.scope: Deactivated successfully.
Sep 9 23:43:33.051142 containerd[1869]: time="2025-09-09T23:43:33.051109250Z" level=info msg="received exit event container_id:\"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" id:\"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" pid:3891 exited_at:{seconds:1757461413 nanos:49826069}"
Sep 9 23:43:33.051243 containerd[1869]: time="2025-09-09T23:43:33.051226445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" id:\"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" pid:3891 exited_at:{seconds:1757461413 nanos:49826069}"
Sep 9 23:43:33.051985 containerd[1869]: time="2025-09-09T23:43:33.051957970Z" level=info msg="StartContainer for \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" returns successfully"
Sep 9 23:43:33.079790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a-rootfs.mount: Deactivated successfully.
Sep 9 23:43:33.327159 containerd[1869]: time="2025-09-09T23:43:33.327109208Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:33.331206 containerd[1869]: time="2025-09-09T23:43:33.331053914Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 9 23:43:33.334482 containerd[1869]: time="2025-09-09T23:43:33.334446876Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:43:33.335377 containerd[1869]: time="2025-09-09T23:43:33.335299812Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.766708169s"
Sep 9 23:43:33.335483 containerd[1869]: time="2025-09-09T23:43:33.335465929Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 23:43:33.337936 containerd[1869]: time="2025-09-09T23:43:33.337899871Z" level=info msg="CreateContainer within sandbox \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 23:43:33.353609 containerd[1869]: time="2025-09-09T23:43:33.353565468Z" level=info msg="Container 59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:33.371989 containerd[1869]: time="2025-09-09T23:43:33.371873732Z" level=info msg="CreateContainer within sandbox \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\""
Sep 9 23:43:33.373814 containerd[1869]: time="2025-09-09T23:43:33.372675739Z" level=info msg="StartContainer for \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\""
Sep 9 23:43:33.374758 containerd[1869]: time="2025-09-09T23:43:33.374694661Z" level=info msg="connecting to shim 59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4" address="unix:///run/containerd/s/7b620a77cc5b1b07b64845055185c5a77a79c16b0a3099fa2f7485a1382a7fc9" protocol=ttrpc version=3
Sep 9 23:43:33.393952 systemd[1]: Started cri-containerd-59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4.scope - libcontainer container 59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4.
Sep 9 23:43:33.426226 containerd[1869]: time="2025-09-09T23:43:33.426191284Z" level=info msg="StartContainer for \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" returns successfully"
Sep 9 23:43:33.901034 containerd[1869]: time="2025-09-09T23:43:33.900962075Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:43:33.929361 containerd[1869]: time="2025-09-09T23:43:33.928596177Z" level=info msg="Container e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:33.951167 containerd[1869]: time="2025-09-09T23:43:33.951123419Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\""
Sep 9 23:43:33.951757 containerd[1869]: time="2025-09-09T23:43:33.951734589Z" level=info msg="StartContainer for \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\""
Sep 9 23:43:33.952441 containerd[1869]: time="2025-09-09T23:43:33.952418576Z" level=info msg="connecting to shim e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c" address="unix:///run/containerd/s/f7c32d19c877e6ac99feec08f95128e86bcf3b66033df2da743ddd9aa5489777" protocol=ttrpc version=3
Sep 9 23:43:33.983978 systemd[1]: Started cri-containerd-e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c.scope - libcontainer container e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c.
Sep 9 23:43:34.044983 systemd[1]: cri-containerd-e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c.scope: Deactivated successfully.
Sep 9 23:43:34.049095 containerd[1869]: time="2025-09-09T23:43:34.049051862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" id:\"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" pid:3967 exited_at:{seconds:1757461414 nanos:47585507}"
Sep 9 23:43:34.049498 containerd[1869]: time="2025-09-09T23:43:34.049469754Z" level=info msg="received exit event container_id:\"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" id:\"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" pid:3967 exited_at:{seconds:1757461414 nanos:47585507}"
Sep 9 23:43:34.062382 containerd[1869]: time="2025-09-09T23:43:34.062199033Z" level=info msg="StartContainer for \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" returns successfully"
Sep 9 23:43:34.912892 containerd[1869]: time="2025-09-09T23:43:34.912842377Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:43:34.915235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c-rootfs.mount: Deactivated successfully.
Sep 9 23:43:34.934403 kubelet[3373]: I0909 23:43:34.934319 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4t9rd" podStartSLOduration=2.841065704 podStartE2EDuration="13.934301516s" podCreationTimestamp="2025-09-09 23:43:21 +0000 UTC" firstStartedPulling="2025-09-09 23:43:22.242987891 +0000 UTC m=+7.593580238" lastFinishedPulling="2025-09-09 23:43:33.336223703 +0000 UTC m=+18.686816050" observedRunningTime="2025-09-09 23:43:33.988079998 +0000 UTC m=+19.338672369" watchObservedRunningTime="2025-09-09 23:43:34.934301516 +0000 UTC m=+20.284893863"
Sep 9 23:43:34.950693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980856050.mount: Deactivated successfully.
Sep 9 23:43:34.954510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227398432.mount: Deactivated successfully.
Sep 9 23:43:34.955722 containerd[1869]: time="2025-09-09T23:43:34.955074260Z" level=info msg="Container a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:34.973833 containerd[1869]: time="2025-09-09T23:43:34.973693189Z" level=info msg="CreateContainer within sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\""
Sep 9 23:43:34.974427 containerd[1869]: time="2025-09-09T23:43:34.974374689Z" level=info msg="StartContainer for \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\""
Sep 9 23:43:34.975559 containerd[1869]: time="2025-09-09T23:43:34.975514490Z" level=info msg="connecting to shim a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b" address="unix:///run/containerd/s/f7c32d19c877e6ac99feec08f95128e86bcf3b66033df2da743ddd9aa5489777" protocol=ttrpc version=3
Sep 9 23:43:34.993059 systemd[1]: Started cri-containerd-a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b.scope - libcontainer container a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b.
Sep 9 23:43:35.028550 containerd[1869]: time="2025-09-09T23:43:35.028437753Z" level=info msg="StartContainer for \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" returns successfully"
Sep 9 23:43:35.111375 containerd[1869]: time="2025-09-09T23:43:35.111334114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" id:\"ac47f649e4358bb1b57250f91b136a7dd94136e2dbb833ea406f44587596965e\" pid:4037 exited_at:{seconds:1757461415 nanos:111069074}"
Sep 9 23:43:35.203680 kubelet[3373]: I0909 23:43:35.203574 3373 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 9 23:43:35.247534 systemd[1]: Created slice kubepods-burstable-pod990195bc_6747_4349_bf50_642d52ea8397.slice - libcontainer container kubepods-burstable-pod990195bc_6747_4349_bf50_642d52ea8397.slice.
Sep 9 23:43:35.254411 systemd[1]: Created slice kubepods-burstable-pod352cde14_6451_4036_ac50_d50fe907e031.slice - libcontainer container kubepods-burstable-pod352cde14_6451_4036_ac50_d50fe907e031.slice.
Sep 9 23:43:35.427165 kubelet[3373]: I0909 23:43:35.427125 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/990195bc-6747-4349-bf50-642d52ea8397-config-volume\") pod \"coredns-7c65d6cfc9-zz54s\" (UID: \"990195bc-6747-4349-bf50-642d52ea8397\") " pod="kube-system/coredns-7c65d6cfc9-zz54s"
Sep 9 23:43:35.427444 kubelet[3373]: I0909 23:43:35.427366 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tr2s\" (UniqueName: \"kubernetes.io/projected/990195bc-6747-4349-bf50-642d52ea8397-kube-api-access-7tr2s\") pod \"coredns-7c65d6cfc9-zz54s\" (UID: \"990195bc-6747-4349-bf50-642d52ea8397\") " pod="kube-system/coredns-7c65d6cfc9-zz54s"
Sep 9 23:43:35.427444 kubelet[3373]: I0909 23:43:35.427386 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9lpp\" (UniqueName: \"kubernetes.io/projected/352cde14-6451-4036-ac50-d50fe907e031-kube-api-access-t9lpp\") pod \"coredns-7c65d6cfc9-68zg7\" (UID: \"352cde14-6451-4036-ac50-d50fe907e031\") " pod="kube-system/coredns-7c65d6cfc9-68zg7"
Sep 9 23:43:35.427444 kubelet[3373]: I0909 23:43:35.427399 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/352cde14-6451-4036-ac50-d50fe907e031-config-volume\") pod \"coredns-7c65d6cfc9-68zg7\" (UID: \"352cde14-6451-4036-ac50-d50fe907e031\") " pod="kube-system/coredns-7c65d6cfc9-68zg7"
Sep 9 23:43:35.554074 containerd[1869]: time="2025-09-09T23:43:35.554032188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zz54s,Uid:990195bc-6747-4349-bf50-642d52ea8397,Namespace:kube-system,Attempt:0,}"
Sep 9 23:43:35.557789 containerd[1869]: time="2025-09-09T23:43:35.557753719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68zg7,Uid:352cde14-6451-4036-ac50-d50fe907e031,Namespace:kube-system,Attempt:0,}"
Sep 9 23:43:35.935851 kubelet[3373]: I0909 23:43:35.935669 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qv8fg" podStartSLOduration=8.499908732 podStartE2EDuration="15.935649786s" podCreationTimestamp="2025-09-09 23:43:20 +0000 UTC" firstStartedPulling="2025-09-09 23:43:22.132001356 +0000 UTC m=+7.482593703" lastFinishedPulling="2025-09-09 23:43:29.56774241 +0000 UTC m=+14.918334757" observedRunningTime="2025-09-09 23:43:35.934998791 +0000 UTC m=+21.285591138" watchObservedRunningTime="2025-09-09 23:43:35.935649786 +0000 UTC m=+21.286242133"
Sep 9 23:43:37.291046 systemd-networkd[1687]: cilium_host: Link UP
Sep 9 23:43:37.291136 systemd-networkd[1687]: cilium_net: Link UP
Sep 9 23:43:37.291226 systemd-networkd[1687]: cilium_net: Gained carrier
Sep 9 23:43:37.291299 systemd-networkd[1687]: cilium_host: Gained carrier
Sep 9 23:43:37.462024 systemd-networkd[1687]: cilium_vxlan: Link UP
Sep 9 23:43:37.462036 systemd-networkd[1687]: cilium_vxlan: Gained carrier
Sep 9 23:43:37.527000 systemd-networkd[1687]: cilium_host: Gained IPv6LL
Sep 9 23:43:37.720972 kernel: NET: Registered PF_ALG protocol family
Sep 9 23:43:37.934969 systemd-networkd[1687]: cilium_net: Gained IPv6LL
Sep 9 23:43:38.341858 systemd-networkd[1687]: lxc_health: Link UP
Sep 9 23:43:38.342234 systemd-networkd[1687]: lxc_health: Gained carrier
Sep 9 23:43:38.640109 kernel: eth0: renamed from tmp4547c
Sep 9 23:43:38.640468 kernel: eth0: renamed from tmp7a437
Sep 9 23:43:38.643535 systemd-networkd[1687]: lxc55e727da5c37: Link UP
Sep 9 23:43:38.644188 systemd-networkd[1687]: lxc0ec15aac51ea: Link UP
Sep 9 23:43:38.644323 systemd-networkd[1687]: lxc55e727da5c37: Gained carrier
Sep 9 23:43:38.650587 systemd-networkd[1687]: lxc0ec15aac51ea: Gained carrier
Sep 9 23:43:39.152009 systemd-networkd[1687]: cilium_vxlan: Gained IPv6LL
Sep 9 23:43:39.791025 systemd-networkd[1687]: lxc55e727da5c37: Gained IPv6LL
Sep 9 23:43:40.304013 systemd-networkd[1687]: lxc_health: Gained IPv6LL
Sep 9 23:43:40.623055 systemd-networkd[1687]: lxc0ec15aac51ea: Gained IPv6LL
Sep 9 23:43:41.348027 containerd[1869]: time="2025-09-09T23:43:41.347979503Z" level=info msg="connecting to shim 7a4373a5badfd43966ab50948fb3831faa0cafe43d0a3ca887c837292e135b17" address="unix:///run/containerd/s/6db67c3ab2729697d6b15a75f8e718dc75b28b1b7e3cfc08c90c29de00c2365b" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:43:41.357270 containerd[1869]: time="2025-09-09T23:43:41.357227160Z" level=info msg="connecting to shim 4547c08bb0ef9cd0e88cde5be9744ae4be7ca655ed6797947e4fa0a732bdd4d7" address="unix:///run/containerd/s/9c032ff0271c4c65a2755fa879bbc84ba5c517cbda4af34141bd0b8c4e7fcf25" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:43:41.372158 systemd[1]: Started cri-containerd-7a4373a5badfd43966ab50948fb3831faa0cafe43d0a3ca887c837292e135b17.scope - libcontainer container 7a4373a5badfd43966ab50948fb3831faa0cafe43d0a3ca887c837292e135b17.
Sep 9 23:43:41.391120 systemd[1]: Started cri-containerd-4547c08bb0ef9cd0e88cde5be9744ae4be7ca655ed6797947e4fa0a732bdd4d7.scope - libcontainer container 4547c08bb0ef9cd0e88cde5be9744ae4be7ca655ed6797947e4fa0a732bdd4d7.
Sep 9 23:43:41.429188 containerd[1869]: time="2025-09-09T23:43:41.429143747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68zg7,Uid:352cde14-6451-4036-ac50-d50fe907e031,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a4373a5badfd43966ab50948fb3831faa0cafe43d0a3ca887c837292e135b17\""
Sep 9 23:43:41.433848 containerd[1869]: time="2025-09-09T23:43:41.433718598Z" level=info msg="CreateContainer within sandbox \"7a4373a5badfd43966ab50948fb3831faa0cafe43d0a3ca887c837292e135b17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 23:43:41.444727 containerd[1869]: time="2025-09-09T23:43:41.444687481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zz54s,Uid:990195bc-6747-4349-bf50-642d52ea8397,Namespace:kube-system,Attempt:0,} returns sandbox id \"4547c08bb0ef9cd0e88cde5be9744ae4be7ca655ed6797947e4fa0a732bdd4d7\""
Sep 9 23:43:41.448318 containerd[1869]: time="2025-09-09T23:43:41.447996600Z" level=info msg="CreateContainer within sandbox \"4547c08bb0ef9cd0e88cde5be9744ae4be7ca655ed6797947e4fa0a732bdd4d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 23:43:41.473925 containerd[1869]: time="2025-09-09T23:43:41.473875264Z" level=info msg="Container c0ba0a478c7465fe0de6255561d5cbe9bd0f07bf92eb17348b3edf2c3746a51a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:41.482414 containerd[1869]: time="2025-09-09T23:43:41.481936447Z" level=info msg="Container 1f3a84c48c416eb54b8b2452c001557f4bd372cfa20f7ad0226d5ce676fe050e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:43:41.505741 containerd[1869]: time="2025-09-09T23:43:41.505689994Z" level=info msg="CreateContainer within sandbox \"4547c08bb0ef9cd0e88cde5be9744ae4be7ca655ed6797947e4fa0a732bdd4d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0ba0a478c7465fe0de6255561d5cbe9bd0f07bf92eb17348b3edf2c3746a51a\""
Sep 9 23:43:41.506660 containerd[1869]: time="2025-09-09T23:43:41.506264290Z" level=info msg="StartContainer for \"c0ba0a478c7465fe0de6255561d5cbe9bd0f07bf92eb17348b3edf2c3746a51a\""
Sep 9 23:43:41.507683 containerd[1869]: time="2025-09-09T23:43:41.507372042Z" level=info msg="connecting to shim c0ba0a478c7465fe0de6255561d5cbe9bd0f07bf92eb17348b3edf2c3746a51a" address="unix:///run/containerd/s/9c032ff0271c4c65a2755fa879bbc84ba5c517cbda4af34141bd0b8c4e7fcf25" protocol=ttrpc version=3
Sep 9 23:43:41.509099 containerd[1869]: time="2025-09-09T23:43:41.509072579Z" level=info msg="CreateContainer within sandbox \"7a4373a5badfd43966ab50948fb3831faa0cafe43d0a3ca887c837292e135b17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f3a84c48c416eb54b8b2452c001557f4bd372cfa20f7ad0226d5ce676fe050e\""
Sep 9 23:43:41.509815 containerd[1869]: time="2025-09-09T23:43:41.509718285Z" level=info msg="StartContainer for \"1f3a84c48c416eb54b8b2452c001557f4bd372cfa20f7ad0226d5ce676fe050e\""
Sep 9 23:43:41.512573 containerd[1869]: time="2025-09-09T23:43:41.512482901Z" level=info msg="connecting to shim 1f3a84c48c416eb54b8b2452c001557f4bd372cfa20f7ad0226d5ce676fe050e" address="unix:///run/containerd/s/6db67c3ab2729697d6b15a75f8e718dc75b28b1b7e3cfc08c90c29de00c2365b" protocol=ttrpc version=3
Sep 9 23:43:41.526986 systemd[1]: Started cri-containerd-c0ba0a478c7465fe0de6255561d5cbe9bd0f07bf92eb17348b3edf2c3746a51a.scope - libcontainer container c0ba0a478c7465fe0de6255561d5cbe9bd0f07bf92eb17348b3edf2c3746a51a.
Sep 9 23:43:41.529982 systemd[1]: Started cri-containerd-1f3a84c48c416eb54b8b2452c001557f4bd372cfa20f7ad0226d5ce676fe050e.scope - libcontainer container 1f3a84c48c416eb54b8b2452c001557f4bd372cfa20f7ad0226d5ce676fe050e.
Sep 9 23:43:41.578518 containerd[1869]: time="2025-09-09T23:43:41.578471525Z" level=info msg="StartContainer for \"c0ba0a478c7465fe0de6255561d5cbe9bd0f07bf92eb17348b3edf2c3746a51a\" returns successfully"
Sep 9 23:43:41.580747 containerd[1869]: time="2025-09-09T23:43:41.580681284Z" level=info msg="StartContainer for \"1f3a84c48c416eb54b8b2452c001557f4bd372cfa20f7ad0226d5ce676fe050e\" returns successfully"
Sep 9 23:43:41.943681 kubelet[3373]: I0909 23:43:41.943612 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zz54s" podStartSLOduration=20.943386089 podStartE2EDuration="20.943386089s" podCreationTimestamp="2025-09-09 23:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:41.941021693 +0000 UTC m=+27.291614040" watchObservedRunningTime="2025-09-09 23:43:41.943386089 +0000 UTC m=+27.293978436"
Sep 9 23:43:44.064013 kubelet[3373]: I0909 23:43:44.063840 3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 23:43:44.080075 kubelet[3373]: I0909 23:43:44.079846 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-68zg7" podStartSLOduration=23.079789387 podStartE2EDuration="23.079789387s" podCreationTimestamp="2025-09-09 23:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:41.975017261 +0000 UTC m=+27.325609608" watchObservedRunningTime="2025-09-09 23:43:44.079789387 +0000 UTC m=+29.430381734"
Sep 9 23:45:19.543239 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:36464.service - OpenSSH per-connection server daemon (10.200.16.10:36464).
Sep 9 23:45:20.002417 sshd[4696]: Accepted publickey for core from 10.200.16.10 port 36464 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:20.003466 sshd-session[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:20.009572 systemd-logind[1852]: New session 10 of user core.
Sep 9 23:45:20.016928 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 23:45:20.400990 sshd[4699]: Connection closed by 10.200.16.10 port 36464
Sep 9 23:45:20.401975 sshd-session[4696]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:20.407709 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:36464.service: Deactivated successfully.
Sep 9 23:45:20.411702 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 23:45:20.413579 systemd-logind[1852]: Session 10 logged out. Waiting for processes to exit.
Sep 9 23:45:20.416717 systemd-logind[1852]: Removed session 10.
Sep 9 23:45:25.494456 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:36364.service - OpenSSH per-connection server daemon (10.200.16.10:36364).
Sep 9 23:45:25.995425 sshd[4713]: Accepted publickey for core from 10.200.16.10 port 36364 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:25.996559 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:26.000150 systemd-logind[1852]: New session 11 of user core.
Sep 9 23:45:26.005952 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 23:45:26.406838 sshd[4716]: Connection closed by 10.200.16.10 port 36364
Sep 9 23:45:26.407361 sshd-session[4713]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:26.412534 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:36364.service: Deactivated successfully.
Sep 9 23:45:26.415994 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 23:45:26.418022 systemd-logind[1852]: Session 11 logged out. Waiting for processes to exit.
Sep 9 23:45:26.419440 systemd-logind[1852]: Removed session 11.
Sep 9 23:45:31.490037 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:45780.service - OpenSSH per-connection server daemon (10.200.16.10:45780).
Sep 9 23:45:31.938003 sshd[4729]: Accepted publickey for core from 10.200.16.10 port 45780 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:31.939150 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:31.943876 systemd-logind[1852]: New session 12 of user core.
Sep 9 23:45:31.951940 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 23:45:32.305715 sshd[4732]: Connection closed by 10.200.16.10 port 45780
Sep 9 23:45:32.306137 sshd-session[4729]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:32.310129 systemd-logind[1852]: Session 12 logged out. Waiting for processes to exit.
Sep 9 23:45:32.310437 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:45780.service: Deactivated successfully.
Sep 9 23:45:32.312349 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 23:45:32.314033 systemd-logind[1852]: Removed session 12.
Sep 9 23:45:37.401815 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:45796.service - OpenSSH per-connection server daemon (10.200.16.10:45796).
Sep 9 23:45:37.894333 sshd[4745]: Accepted publickey for core from 10.200.16.10 port 45796 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:37.895350 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:37.899874 systemd-logind[1852]: New session 13 of user core.
Sep 9 23:45:37.908968 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 23:45:38.290221 sshd[4748]: Connection closed by 10.200.16.10 port 45796
Sep 9 23:45:38.290884 sshd-session[4745]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:38.294382 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:45796.service: Deactivated successfully.
Sep 9 23:45:38.296222 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 23:45:38.297147 systemd-logind[1852]: Session 13 logged out. Waiting for processes to exit.
Sep 9 23:45:38.298331 systemd-logind[1852]: Removed session 13.
Sep 9 23:45:38.404415 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:45810.service - OpenSSH per-connection server daemon (10.200.16.10:45810).
Sep 9 23:45:38.859373 sshd[4760]: Accepted publickey for core from 10.200.16.10 port 45810 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:38.860514 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:38.864136 systemd-logind[1852]: New session 14 of user core.
Sep 9 23:45:38.873937 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 23:45:39.276761 sshd[4763]: Connection closed by 10.200.16.10 port 45810
Sep 9 23:45:39.277378 sshd-session[4760]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:39.282150 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:45810.service: Deactivated successfully.
Sep 9 23:45:39.286258 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 23:45:39.287344 systemd-logind[1852]: Session 14 logged out. Waiting for processes to exit.
Sep 9 23:45:39.288777 systemd-logind[1852]: Removed session 14.
Sep 9 23:45:39.379027 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:45822.service - OpenSSH per-connection server daemon (10.200.16.10:45822).
Sep 9 23:45:39.847119 sshd[4772]: Accepted publickey for core from 10.200.16.10 port 45822 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:39.848280 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:39.852579 systemd-logind[1852]: New session 15 of user core.
Sep 9 23:45:39.856946 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 23:45:40.232917 sshd[4775]: Connection closed by 10.200.16.10 port 45822
Sep 9 23:45:40.233511 sshd-session[4772]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:40.237595 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:45822.service: Deactivated successfully.
Sep 9 23:45:40.240037 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 23:45:40.240891 systemd-logind[1852]: Session 15 logged out. Waiting for processes to exit.
Sep 9 23:45:40.242165 systemd-logind[1852]: Removed session 15.
Sep 9 23:45:45.318043 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:59054.service - OpenSSH per-connection server daemon (10.200.16.10:59054).
Sep 9 23:45:45.782928 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 59054 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:45:45.784083 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:45:45.789078 systemd-logind[1852]: New session 16 of user core.
Sep 9 23:45:45.792919 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 23:45:46.170098 sshd[4790]: Connection closed by 10.200.16.10 port 59054
Sep 9 23:45:46.170969 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:46.175669 systemd-logind[1852]: Session 16 logged out. Waiting for processes to exit.
Sep 9 23:45:46.176304 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:59054.service: Deactivated successfully.
Sep 9 23:45:46.178505 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 23:45:46.180258 systemd-logind[1852]: Removed session 16. Sep 9 23:45:46.252367 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:59070.service - OpenSSH per-connection server daemon (10.200.16.10:59070). Sep 9 23:45:46.711672 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 59070 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:46.712764 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:46.716831 systemd-logind[1852]: New session 17 of user core. Sep 9 23:45:46.720933 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 23:45:47.168052 sshd[4804]: Connection closed by 10.200.16.10 port 59070 Sep 9 23:45:47.168680 sshd-session[4801]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:47.172564 systemd-logind[1852]: Session 17 logged out. Waiting for processes to exit. Sep 9 23:45:47.173188 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:59070.service: Deactivated successfully. Sep 9 23:45:47.175283 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 23:45:47.177481 systemd-logind[1852]: Removed session 17. Sep 9 23:45:47.261148 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:59080.service - OpenSSH per-connection server daemon (10.200.16.10:59080). Sep 9 23:45:47.756170 sshd[4813]: Accepted publickey for core from 10.200.16.10 port 59080 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:47.758988 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:47.767420 systemd-logind[1852]: New session 18 of user core. Sep 9 23:45:47.772911 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 9 23:45:48.829227 sshd[4816]: Connection closed by 10.200.16.10 port 59080 Sep 9 23:45:48.830112 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:48.834276 systemd-logind[1852]: Session 18 logged out. Waiting for processes to exit. Sep 9 23:45:48.834582 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:59080.service: Deactivated successfully. Sep 9 23:45:48.837527 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 23:45:48.839951 systemd-logind[1852]: Removed session 18. Sep 9 23:45:48.914564 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:59090.service - OpenSSH per-connection server daemon (10.200.16.10:59090). Sep 9 23:45:49.368886 sshd[4834]: Accepted publickey for core from 10.200.16.10 port 59090 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:49.370014 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:49.374207 systemd-logind[1852]: New session 19 of user core. Sep 9 23:45:49.380919 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 23:45:49.808678 sshd[4837]: Connection closed by 10.200.16.10 port 59090 Sep 9 23:45:49.809355 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:49.813213 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:59090.service: Deactivated successfully. Sep 9 23:45:49.813471 systemd-logind[1852]: Session 19 logged out. Waiting for processes to exit. Sep 9 23:45:49.816308 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 23:45:49.818507 systemd-logind[1852]: Removed session 19. Sep 9 23:45:49.897102 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:52570.service - OpenSSH per-connection server daemon (10.200.16.10:52570). 
Sep 9 23:45:50.348755 sshd[4846]: Accepted publickey for core from 10.200.16.10 port 52570 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:50.349890 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:50.353894 systemd-logind[1852]: New session 20 of user core. Sep 9 23:45:50.358934 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 23:45:50.740099 sshd[4849]: Connection closed by 10.200.16.10 port 52570 Sep 9 23:45:50.741154 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:50.744965 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:52570.service: Deactivated successfully. Sep 9 23:45:50.747156 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 23:45:50.749940 systemd-logind[1852]: Session 20 logged out. Waiting for processes to exit. Sep 9 23:45:50.752180 systemd-logind[1852]: Removed session 20. Sep 9 23:45:55.812659 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:52574.service - OpenSSH per-connection server daemon (10.200.16.10:52574). Sep 9 23:45:56.236771 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 52574 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:56.237743 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:56.241521 systemd-logind[1852]: New session 21 of user core. Sep 9 23:45:56.258960 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 23:45:56.593400 sshd[4869]: Connection closed by 10.200.16.10 port 52574 Sep 9 23:45:56.593234 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:56.597026 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:52574.service: Deactivated successfully. Sep 9 23:45:56.600632 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 23:45:56.605231 systemd-logind[1852]: Session 21 logged out. 
Waiting for processes to exit. Sep 9 23:45:56.606205 systemd-logind[1852]: Removed session 21. Sep 9 23:46:01.676597 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:36580.service - OpenSSH per-connection server daemon (10.200.16.10:36580). Sep 9 23:46:02.086648 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 36580 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:02.088069 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:02.092384 systemd-logind[1852]: New session 22 of user core. Sep 9 23:46:02.096941 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 23:46:02.447742 sshd[4883]: Connection closed by 10.200.16.10 port 36580 Sep 9 23:46:02.448303 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:02.451699 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:36580.service: Deactivated successfully. Sep 9 23:46:02.453559 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 23:46:02.454289 systemd-logind[1852]: Session 22 logged out. Waiting for processes to exit. Sep 9 23:46:02.455457 systemd-logind[1852]: Removed session 22. Sep 9 23:46:07.524616 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:36588.service - OpenSSH per-connection server daemon (10.200.16.10:36588). Sep 9 23:46:07.940959 sshd[4894]: Accepted publickey for core from 10.200.16.10 port 36588 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:07.942247 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:07.946055 systemd-logind[1852]: New session 23 of user core. Sep 9 23:46:07.955070 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 9 23:46:08.298527 sshd[4897]: Connection closed by 10.200.16.10 port 36588 Sep 9 23:46:08.299217 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:08.303119 systemd-logind[1852]: Session 23 logged out. Waiting for processes to exit. Sep 9 23:46:08.303688 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:36588.service: Deactivated successfully. Sep 9 23:46:08.306312 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 23:46:08.309561 systemd-logind[1852]: Removed session 23. Sep 9 23:46:08.396036 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:36594.service - OpenSSH per-connection server daemon (10.200.16.10:36594). Sep 9 23:46:08.895132 sshd[4908]: Accepted publickey for core from 10.200.16.10 port 36594 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:08.896245 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:08.899963 systemd-logind[1852]: New session 24 of user core. Sep 9 23:46:08.905967 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 23:46:10.597536 containerd[1869]: time="2025-09-09T23:46:10.597484809Z" level=info msg="StopContainer for \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" with timeout 30 (s)" Sep 9 23:46:10.599171 containerd[1869]: time="2025-09-09T23:46:10.598431475Z" level=info msg="Stop container \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" with signal terminated" Sep 9 23:46:10.614641 systemd[1]: cri-containerd-59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4.scope: Deactivated successfully. 
Sep 9 23:46:10.617690 containerd[1869]: time="2025-09-09T23:46:10.617652451Z" level=info msg="received exit event container_id:\"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" id:\"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" pid:3935 exited_at:{seconds:1757461570 nanos:617407908}" Sep 9 23:46:10.618972 containerd[1869]: time="2025-09-09T23:46:10.618950543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" id:\"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" pid:3935 exited_at:{seconds:1757461570 nanos:617407908}" Sep 9 23:46:10.637508 containerd[1869]: time="2025-09-09T23:46:10.637470459Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:46:10.644819 containerd[1869]: time="2025-09-09T23:46:10.644766094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" id:\"133e2ebea7e0d21750a024d46dd4fc7b3d01721f0cc7d34d41e5ef97b44202f4\" pid:4937 exited_at:{seconds:1757461570 nanos:644043866}" Sep 9 23:46:10.647276 containerd[1869]: time="2025-09-09T23:46:10.647236459Z" level=info msg="StopContainer for \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" with timeout 2 (s)" Sep 9 23:46:10.647747 containerd[1869]: time="2025-09-09T23:46:10.647632126Z" level=info msg="Stop container \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" with signal terminated" Sep 9 23:46:10.659822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4-rootfs.mount: Deactivated successfully. 
Sep 9 23:46:10.666825 systemd-networkd[1687]: lxc_health: Link DOWN Sep 9 23:46:10.666832 systemd-networkd[1687]: lxc_health: Lost carrier Sep 9 23:46:10.679242 systemd[1]: cri-containerd-a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b.scope: Deactivated successfully. Sep 9 23:46:10.680943 systemd[1]: cri-containerd-a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b.scope: Consumed 4.876s CPU time, 124.8M memory peak, 128K read from disk, 12.9M written to disk. Sep 9 23:46:10.682094 containerd[1869]: time="2025-09-09T23:46:10.682060925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" id:\"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" pid:4004 exited_at:{seconds:1757461570 nanos:681530142}" Sep 9 23:46:10.682361 containerd[1869]: time="2025-09-09T23:46:10.682243018Z" level=info msg="received exit event container_id:\"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" id:\"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" pid:4004 exited_at:{seconds:1757461570 nanos:681530142}" Sep 9 23:46:10.698211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b-rootfs.mount: Deactivated successfully. 
Sep 9 23:46:10.724865 containerd[1869]: time="2025-09-09T23:46:10.724394720Z" level=info msg="StopContainer for \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" returns successfully" Sep 9 23:46:10.725591 containerd[1869]: time="2025-09-09T23:46:10.725112780Z" level=info msg="StopPodSandbox for \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\"" Sep 9 23:46:10.725591 containerd[1869]: time="2025-09-09T23:46:10.725236031Z" level=info msg="Container to stop \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:10.725591 containerd[1869]: time="2025-09-09T23:46:10.725245528Z" level=info msg="Container to stop \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:10.725591 containerd[1869]: time="2025-09-09T23:46:10.725251392Z" level=info msg="Container to stop \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:10.725591 containerd[1869]: time="2025-09-09T23:46:10.725256568Z" level=info msg="Container to stop \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:10.725591 containerd[1869]: time="2025-09-09T23:46:10.725261448Z" level=info msg="Container to stop \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:10.734060 systemd[1]: cri-containerd-da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61.scope: Deactivated successfully. 
Sep 9 23:46:10.739254 containerd[1869]: time="2025-09-09T23:46:10.733862152Z" level=info msg="StopContainer for \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" returns successfully" Sep 9 23:46:10.739772 containerd[1869]: time="2025-09-09T23:46:10.738293243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" id:\"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" pid:3478 exit_status:137 exited_at:{seconds:1757461570 nanos:734771593}" Sep 9 23:46:10.740658 containerd[1869]: time="2025-09-09T23:46:10.740624500Z" level=info msg="StopPodSandbox for \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\"" Sep 9 23:46:10.740871 containerd[1869]: time="2025-09-09T23:46:10.740674029Z" level=info msg="Container to stop \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:10.754108 systemd[1]: cri-containerd-a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93.scope: Deactivated successfully. Sep 9 23:46:10.773731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61-rootfs.mount: Deactivated successfully. Sep 9 23:46:10.777384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93-rootfs.mount: Deactivated successfully. 
Sep 9 23:46:10.789617 containerd[1869]: time="2025-09-09T23:46:10.789322616Z" level=info msg="shim disconnected" id=a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93 namespace=k8s.io Sep 9 23:46:10.789959 containerd[1869]: time="2025-09-09T23:46:10.789611632Z" level=warning msg="cleaning up after shim disconnected" id=a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93 namespace=k8s.io Sep 9 23:46:10.789959 containerd[1869]: time="2025-09-09T23:46:10.789898560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:46:10.790117 containerd[1869]: time="2025-09-09T23:46:10.789846607Z" level=info msg="shim disconnected" id=da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61 namespace=k8s.io Sep 9 23:46:10.790163 containerd[1869]: time="2025-09-09T23:46:10.790108374Z" level=warning msg="cleaning up after shim disconnected" id=da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61 namespace=k8s.io Sep 9 23:46:10.790163 containerd[1869]: time="2025-09-09T23:46:10.790126783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:46:10.804679 containerd[1869]: time="2025-09-09T23:46:10.803325254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" id:\"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" pid:3524 exit_status:137 exited_at:{seconds:1757461570 nanos:757185337}" Sep 9 23:46:10.805013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61-shm.mount: Deactivated successfully. 
Sep 9 23:46:10.805996 containerd[1869]: time="2025-09-09T23:46:10.805957696Z" level=info msg="received exit event sandbox_id:\"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" exit_status:137 exited_at:{seconds:1757461570 nanos:757185337}" Sep 9 23:46:10.806426 containerd[1869]: time="2025-09-09T23:46:10.806306961Z" level=info msg="TearDown network for sandbox \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" successfully" Sep 9 23:46:10.806426 containerd[1869]: time="2025-09-09T23:46:10.806332690Z" level=info msg="StopPodSandbox for \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" returns successfully" Sep 9 23:46:10.806426 containerd[1869]: time="2025-09-09T23:46:10.805038646Z" level=info msg="received exit event sandbox_id:\"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" exit_status:137 exited_at:{seconds:1757461570 nanos:734771593}" Sep 9 23:46:10.806657 containerd[1869]: time="2025-09-09T23:46:10.806640891Z" level=info msg="TearDown network for sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" successfully" Sep 9 23:46:10.806882 containerd[1869]: time="2025-09-09T23:46:10.806863417Z" level=info msg="StopPodSandbox for \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" returns successfully" Sep 9 23:46:10.837858 kubelet[3373]: I0909 23:46:10.837819 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-xtables-lock\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.837858 kubelet[3373]: I0909 23:46:10.837862 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-hubble-tls\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: 
\"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838343 kubelet[3373]: I0909 23:46:10.837877 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-config-path\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838343 kubelet[3373]: I0909 23:46:10.837892 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-kernel\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838343 kubelet[3373]: I0909 23:46:10.837904 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82c5441b-f90f-4269-a357-5ae7931672ba-clustermesh-secrets\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838343 kubelet[3373]: I0909 23:46:10.837915 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cni-path\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838343 kubelet[3373]: I0909 23:46:10.837925 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-cgroup\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838343 kubelet[3373]: I0909 23:46:10.837935 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-run\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838493 kubelet[3373]: I0909 23:46:10.837945 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-hostproc\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838493 kubelet[3373]: I0909 23:46:10.837954 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-bpf-maps\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838493 kubelet[3373]: I0909 23:46:10.837965 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg78p\" (UniqueName: \"kubernetes.io/projected/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-kube-api-access-vg78p\") pod \"4367c7c1-a27c-48d4-84f1-e5c3ca0735ae\" (UID: \"4367c7c1-a27c-48d4-84f1-e5c3ca0735ae\") " Sep 9 23:46:10.838493 kubelet[3373]: I0909 23:46:10.837976 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-net\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838493 kubelet[3373]: I0909 23:46:10.837988 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-cilium-config-path\") pod \"4367c7c1-a27c-48d4-84f1-e5c3ca0735ae\" (UID: \"4367c7c1-a27c-48d4-84f1-e5c3ca0735ae\") " Sep 9 23:46:10.838493 kubelet[3373]: I0909 23:46:10.837997 3373 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-etc-cni-netd\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838629 kubelet[3373]: I0909 23:46:10.838008 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh92l\" (UniqueName: \"kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-kube-api-access-rh92l\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838629 kubelet[3373]: I0909 23:46:10.838016 3373 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-lib-modules\") pod \"82c5441b-f90f-4269-a357-5ae7931672ba\" (UID: \"82c5441b-f90f-4269-a357-5ae7931672ba\") " Sep 9 23:46:10.838629 kubelet[3373]: I0909 23:46:10.838078 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.838629 kubelet[3373]: I0909 23:46:10.838103 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.838860 kubelet[3373]: I0909 23:46:10.838827 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-hostproc" (OuterVolumeSpecName: "hostproc") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.840278 kubelet[3373]: I0909 23:46:10.840230 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.841604 kubelet[3373]: I0909 23:46:10.841574 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.842220 kubelet[3373]: I0909 23:46:10.842185 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.844022 kubelet[3373]: I0909 23:46:10.843919 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.845031 kubelet[3373]: I0909 23:46:10.844993 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cni-path" (OuterVolumeSpecName: "cni-path") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.845031 kubelet[3373]: I0909 23:46:10.845033 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.845227 kubelet[3373]: I0909 23:46:10.845044 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:46:10.846753 kubelet[3373]: I0909 23:46:10.846709 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4367c7c1-a27c-48d4-84f1-e5c3ca0735ae" (UID: "4367c7c1-a27c-48d4-84f1-e5c3ca0735ae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 23:46:10.847656 kubelet[3373]: I0909 23:46:10.847504 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 23:46:10.847656 kubelet[3373]: I0909 23:46:10.847591 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-kube-api-access-vg78p" (OuterVolumeSpecName: "kube-api-access-vg78p") pod "4367c7c1-a27c-48d4-84f1-e5c3ca0735ae" (UID: "4367c7c1-a27c-48d4-84f1-e5c3ca0735ae"). InnerVolumeSpecName "kube-api-access-vg78p". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 23:46:10.849109 kubelet[3373]: I0909 23:46:10.849080 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c5441b-f90f-4269-a357-5ae7931672ba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 23:46:10.849676 kubelet[3373]: I0909 23:46:10.849647 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-kube-api-access-rh92l" (OuterVolumeSpecName: "kube-api-access-rh92l") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "kube-api-access-rh92l". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 23:46:10.850275 kubelet[3373]: I0909 23:46:10.850251 3373 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "82c5441b-f90f-4269-a357-5ae7931672ba" (UID: "82c5441b-f90f-4269-a357-5ae7931672ba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 23:46:10.938204 kubelet[3373]: I0909 23:46:10.938168 3373 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-run\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938858 3373 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-cgroup\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938877 3373 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-hostproc\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938884 3373 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-bpf-maps\") on node 
\"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938892 3373 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg78p\" (UniqueName: \"kubernetes.io/projected/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-kube-api-access-vg78p\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938899 3373 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-net\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938907 3373 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae-cilium-config-path\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938914 3373 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh92l\" (UniqueName: \"kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-kube-api-access-rh92l\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.938911 kubelet[3373]: I0909 23:46:10.938920 3373 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-lib-modules\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.939062 kubelet[3373]: I0909 23:46:10.938926 3373 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-etc-cni-netd\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.939062 kubelet[3373]: I0909 23:46:10.938932 3373 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-xtables-lock\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.939062 kubelet[3373]: I0909 23:46:10.938938 3373 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82c5441b-f90f-4269-a357-5ae7931672ba-cilium-config-path\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.939062 kubelet[3373]: I0909 23:46:10.938944 3373 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-host-proc-sys-kernel\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.939062 kubelet[3373]: I0909 23:46:10.938949 3373 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82c5441b-f90f-4269-a357-5ae7931672ba-hubble-tls\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.939062 kubelet[3373]: I0909 23:46:10.938956 3373 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82c5441b-f90f-4269-a357-5ae7931672ba-clustermesh-secrets\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:10.939062 kubelet[3373]: I0909 23:46:10.938961 3373 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82c5441b-f90f-4269-a357-5ae7931672ba-cni-path\") on node \"ci-4426.0.0-n-3e4141976f\" DevicePath \"\"" Sep 9 23:46:11.206171 kubelet[3373]: I0909 23:46:11.205941 3373 scope.go:117] "RemoveContainer" containerID="59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4" Sep 9 23:46:11.207758 containerd[1869]: time="2025-09-09T23:46:11.207716470Z" level=info msg="RemoveContainer for \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\"" Sep 9 23:46:11.212769 systemd[1]: Removed slice 
kubepods-besteffort-pod4367c7c1_a27c_48d4_84f1_e5c3ca0735ae.slice - libcontainer container kubepods-besteffort-pod4367c7c1_a27c_48d4_84f1_e5c3ca0735ae.slice. Sep 9 23:46:11.217295 containerd[1869]: time="2025-09-09T23:46:11.217253815Z" level=info msg="RemoveContainer for \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" returns successfully" Sep 9 23:46:11.217761 kubelet[3373]: I0909 23:46:11.217745 3373 scope.go:117] "RemoveContainer" containerID="59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4" Sep 9 23:46:11.218242 containerd[1869]: time="2025-09-09T23:46:11.218214170Z" level=error msg="ContainerStatus for \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\": not found" Sep 9 23:46:11.218584 kubelet[3373]: E0909 23:46:11.218565 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\": not found" containerID="59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4" Sep 9 23:46:11.218772 kubelet[3373]: I0909 23:46:11.218654 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4"} err="failed to get container status \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"59a25278876187d0af72a2ecac7d03f8d6b2d61ae88d9f664003c9dfc2b699e4\": not found" Sep 9 23:46:11.218916 kubelet[3373]: I0909 23:46:11.218905 3373 scope.go:117] "RemoveContainer" containerID="a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b" Sep 9 23:46:11.220763 containerd[1869]: 
time="2025-09-09T23:46:11.220739049Z" level=info msg="RemoveContainer for \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\"" Sep 9 23:46:11.223865 systemd[1]: Removed slice kubepods-burstable-pod82c5441b_f90f_4269_a357_5ae7931672ba.slice - libcontainer container kubepods-burstable-pod82c5441b_f90f_4269_a357_5ae7931672ba.slice. Sep 9 23:46:11.223955 systemd[1]: kubepods-burstable-pod82c5441b_f90f_4269_a357_5ae7931672ba.slice: Consumed 4.940s CPU time, 125.2M memory peak, 128K read from disk, 12.9M written to disk. Sep 9 23:46:11.233164 containerd[1869]: time="2025-09-09T23:46:11.233037071Z" level=info msg="RemoveContainer for \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" returns successfully" Sep 9 23:46:11.233252 kubelet[3373]: I0909 23:46:11.233234 3373 scope.go:117] "RemoveContainer" containerID="e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c" Sep 9 23:46:11.236462 containerd[1869]: time="2025-09-09T23:46:11.236436886Z" level=info msg="RemoveContainer for \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\"" Sep 9 23:46:11.246401 containerd[1869]: time="2025-09-09T23:46:11.245755417Z" level=info msg="RemoveContainer for \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" returns successfully" Sep 9 23:46:11.247078 kubelet[3373]: I0909 23:46:11.247049 3373 scope.go:117] "RemoveContainer" containerID="66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a" Sep 9 23:46:11.249192 containerd[1869]: time="2025-09-09T23:46:11.249173761Z" level=info msg="RemoveContainer for \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\"" Sep 9 23:46:11.257764 containerd[1869]: time="2025-09-09T23:46:11.257213529Z" level=info msg="RemoveContainer for \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" returns successfully" Sep 9 23:46:11.257866 kubelet[3373]: I0909 23:46:11.257385 3373 scope.go:117] "RemoveContainer" 
containerID="991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3" Sep 9 23:46:11.258856 containerd[1869]: time="2025-09-09T23:46:11.258836414Z" level=info msg="RemoveContainer for \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\"" Sep 9 23:46:11.267048 containerd[1869]: time="2025-09-09T23:46:11.267015826Z" level=info msg="RemoveContainer for \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" returns successfully" Sep 9 23:46:11.267305 kubelet[3373]: I0909 23:46:11.267271 3373 scope.go:117] "RemoveContainer" containerID="4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0" Sep 9 23:46:11.268737 containerd[1869]: time="2025-09-09T23:46:11.268717289Z" level=info msg="RemoveContainer for \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\"" Sep 9 23:46:11.276207 containerd[1869]: time="2025-09-09T23:46:11.276084686Z" level=info msg="RemoveContainer for \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" returns successfully" Sep 9 23:46:11.276418 kubelet[3373]: I0909 23:46:11.276385 3373 scope.go:117] "RemoveContainer" containerID="a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b" Sep 9 23:46:11.276647 containerd[1869]: time="2025-09-09T23:46:11.276614725Z" level=error msg="ContainerStatus for \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\": not found" Sep 9 23:46:11.276770 kubelet[3373]: E0909 23:46:11.276750 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\": not found" containerID="a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b" Sep 9 23:46:11.276917 kubelet[3373]: I0909 23:46:11.276895 3373 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b"} err="failed to get container status \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5288a078629a72c83a1ce9d446dc428229c314f7eab9522f592a40dd1e4d62b\": not found" Sep 9 23:46:11.277072 kubelet[3373]: I0909 23:46:11.276989 3373 scope.go:117] "RemoveContainer" containerID="e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c" Sep 9 23:46:11.277251 containerd[1869]: time="2025-09-09T23:46:11.277167260Z" level=error msg="ContainerStatus for \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\": not found" Sep 9 23:46:11.277359 kubelet[3373]: E0909 23:46:11.277337 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\": not found" containerID="e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c" Sep 9 23:46:11.277397 kubelet[3373]: I0909 23:46:11.277363 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c"} err="failed to get container status \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3679c3349d42e4f8820de43285f73d3adc6e68248b2c342e5ce61004e827e0c\": not found" Sep 9 23:46:11.277397 kubelet[3373]: I0909 23:46:11.277380 3373 scope.go:117] "RemoveContainer" 
containerID="66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a" Sep 9 23:46:11.277597 containerd[1869]: time="2025-09-09T23:46:11.277567319Z" level=error msg="ContainerStatus for \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\": not found" Sep 9 23:46:11.277849 kubelet[3373]: E0909 23:46:11.277834 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\": not found" containerID="66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a" Sep 9 23:46:11.278037 kubelet[3373]: I0909 23:46:11.277941 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a"} err="failed to get container status \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\": rpc error: code = NotFound desc = an error occurred when try to find container \"66ba5415ddd95e5866bd44330ca79d9f866d199a018c05571cb87a2688a8f56a\": not found" Sep 9 23:46:11.278037 kubelet[3373]: I0909 23:46:11.277966 3373 scope.go:117] "RemoveContainer" containerID="991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3" Sep 9 23:46:11.278169 containerd[1869]: time="2025-09-09T23:46:11.278122703Z" level=error msg="ContainerStatus for \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\": not found" Sep 9 23:46:11.278271 kubelet[3373]: E0909 23:46:11.278255 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\": not found" containerID="991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3" Sep 9 23:46:11.278354 kubelet[3373]: I0909 23:46:11.278337 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3"} err="failed to get container status \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"991a49685d24497ebb8ab0f20e3e3bdad6df1944b33f2f82b350d20eb88480c3\": not found" Sep 9 23:46:11.278409 kubelet[3373]: I0909 23:46:11.278399 3373 scope.go:117] "RemoveContainer" containerID="4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0" Sep 9 23:46:11.278658 containerd[1869]: time="2025-09-09T23:46:11.278629837Z" level=error msg="ContainerStatus for \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\": not found" Sep 9 23:46:11.278785 kubelet[3373]: E0909 23:46:11.278732 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\": not found" containerID="4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0" Sep 9 23:46:11.278872 kubelet[3373]: I0909 23:46:11.278787 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0"} err="failed to get container status \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"4c284800e427235e9d3209e80b5c2dfb08cf5c444dbacd9a9ea53f74ad52ecc0\": not found" Sep 9 23:46:11.659605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93-shm.mount: Deactivated successfully. Sep 9 23:46:11.659695 systemd[1]: var-lib-kubelet-pods-4367c7c1\x2da27c\x2d48d4\x2d84f1\x2de5c3ca0735ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvg78p.mount: Deactivated successfully. Sep 9 23:46:11.659737 systemd[1]: var-lib-kubelet-pods-82c5441b\x2df90f\x2d4269\x2da357\x2d5ae7931672ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drh92l.mount: Deactivated successfully. Sep 9 23:46:11.659776 systemd[1]: var-lib-kubelet-pods-82c5441b\x2df90f\x2d4269\x2da357\x2d5ae7931672ba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 23:46:11.659833 systemd[1]: var-lib-kubelet-pods-82c5441b\x2df90f\x2d4269\x2da357\x2d5ae7931672ba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 23:46:12.632635 sshd[4911]: Connection closed by 10.200.16.10 port 36594 Sep 9 23:46:12.633671 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:12.641405 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:36594.service: Deactivated successfully. Sep 9 23:46:12.644401 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 23:46:12.645108 systemd-logind[1852]: Session 24 logged out. Waiting for processes to exit. Sep 9 23:46:12.646615 systemd-logind[1852]: Removed session 24. Sep 9 23:46:12.715346 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:48390.service - OpenSSH per-connection server daemon (10.200.16.10:48390). 
Sep 9 23:46:12.812819 kubelet[3373]: I0909 23:46:12.812150 3373 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4367c7c1-a27c-48d4-84f1-e5c3ca0735ae" path="/var/lib/kubelet/pods/4367c7c1-a27c-48d4-84f1-e5c3ca0735ae/volumes" Sep 9 23:46:12.812819 kubelet[3373]: I0909 23:46:12.812429 3373 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c5441b-f90f-4269-a357-5ae7931672ba" path="/var/lib/kubelet/pods/82c5441b-f90f-4269-a357-5ae7931672ba/volumes" Sep 9 23:46:13.177076 sshd[5064]: Accepted publickey for core from 10.200.16.10 port 48390 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:13.178178 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:13.182444 systemd-logind[1852]: New session 25 of user core. Sep 9 23:46:13.191022 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 23:46:13.814490 kubelet[3373]: E0909 23:46:13.814446 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82c5441b-f90f-4269-a357-5ae7931672ba" containerName="apply-sysctl-overwrites" Sep 9 23:46:13.814490 kubelet[3373]: E0909 23:46:13.814482 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82c5441b-f90f-4269-a357-5ae7931672ba" containerName="clean-cilium-state" Sep 9 23:46:13.814490 kubelet[3373]: E0909 23:46:13.814489 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82c5441b-f90f-4269-a357-5ae7931672ba" containerName="mount-cgroup" Sep 9 23:46:13.814490 kubelet[3373]: E0909 23:46:13.814493 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82c5441b-f90f-4269-a357-5ae7931672ba" containerName="mount-bpf-fs" Sep 9 23:46:13.814490 kubelet[3373]: E0909 23:46:13.814497 3373 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4367c7c1-a27c-48d4-84f1-e5c3ca0735ae" containerName="cilium-operator" Sep 9 23:46:13.814490 kubelet[3373]: E0909 23:46:13.814501 3373 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82c5441b-f90f-4269-a357-5ae7931672ba" containerName="cilium-agent" Sep 9 23:46:13.815049 kubelet[3373]: I0909 23:46:13.814519 3373 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c5441b-f90f-4269-a357-5ae7931672ba" containerName="cilium-agent" Sep 9 23:46:13.815049 kubelet[3373]: I0909 23:46:13.814523 3373 memory_manager.go:354] "RemoveStaleState removing state" podUID="4367c7c1-a27c-48d4-84f1-e5c3ca0735ae" containerName="cilium-operator" Sep 9 23:46:13.824406 systemd[1]: Created slice kubepods-burstable-pod98fbe302_4d15_405f_abd3_a548b34be52a.slice - libcontainer container kubepods-burstable-pod98fbe302_4d15_405f_abd3_a548b34be52a.slice. Sep 9 23:46:13.857120 kubelet[3373]: I0909 23:46:13.857079 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-cilium-run\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858555 kubelet[3373]: I0909 23:46:13.857381 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-xtables-lock\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858816 kubelet[3373]: I0909 23:46:13.858760 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98fbe302-4d15-405f-abd3-a548b34be52a-cilium-config-path\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858862 kubelet[3373]: I0909 23:46:13.858830 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98fbe302-4d15-405f-abd3-a548b34be52a-hubble-tls\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858862 kubelet[3373]: I0909 23:46:13.858842 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-bpf-maps\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858862 kubelet[3373]: I0909 23:46:13.858852 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-cilium-cgroup\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858925 kubelet[3373]: I0909 23:46:13.858864 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-host-proc-sys-kernel\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858925 kubelet[3373]: I0909 23:46:13.858874 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-hostproc\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858925 kubelet[3373]: I0909 23:46:13.858884 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-cni-path\") pod \"cilium-r5kd2\" (UID: 
\"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858925 kubelet[3373]: I0909 23:46:13.858892 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-etc-cni-netd\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858925 kubelet[3373]: I0909 23:46:13.858903 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98fbe302-4d15-405f-abd3-a548b34be52a-cilium-ipsec-secrets\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.858925 kubelet[3373]: I0909 23:46:13.858913 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-host-proc-sys-net\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.859039 kubelet[3373]: I0909 23:46:13.858924 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98fbe302-4d15-405f-abd3-a548b34be52a-lib-modules\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.859039 kubelet[3373]: I0909 23:46:13.858933 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98fbe302-4d15-405f-abd3-a548b34be52a-clustermesh-secrets\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.859039 kubelet[3373]: I0909 
23:46:13.858944 3373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvck\" (UniqueName: \"kubernetes.io/projected/98fbe302-4d15-405f-abd3-a548b34be52a-kube-api-access-qqvck\") pod \"cilium-r5kd2\" (UID: \"98fbe302-4d15-405f-abd3-a548b34be52a\") " pod="kube-system/cilium-r5kd2" Sep 9 23:46:13.883158 sshd[5067]: Connection closed by 10.200.16.10 port 48390 Sep 9 23:46:13.884196 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:13.893407 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:48390.service: Deactivated successfully. Sep 9 23:46:13.896307 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 23:46:13.897709 systemd-logind[1852]: Session 25 logged out. Waiting for processes to exit. Sep 9 23:46:13.899987 systemd-logind[1852]: Removed session 25. Sep 9 23:46:13.955388 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:48394.service - OpenSSH per-connection server daemon (10.200.16.10:48394). Sep 9 23:46:14.131378 containerd[1869]: time="2025-09-09T23:46:14.131266478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5kd2,Uid:98fbe302-4d15-405f-abd3-a548b34be52a,Namespace:kube-system,Attempt:0,}" Sep 9 23:46:14.179243 containerd[1869]: time="2025-09-09T23:46:14.179201190Z" level=info msg="connecting to shim c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495" address="unix:///run/containerd/s/213259cbbd167056633942b0c49383747fe7a219d07752eb0a129d70086f3772" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:46:14.199936 systemd[1]: Started cri-containerd-c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495.scope - libcontainer container c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495. 
Sep 9 23:46:14.225710 containerd[1869]: time="2025-09-09T23:46:14.225674844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5kd2,Uid:98fbe302-4d15-405f-abd3-a548b34be52a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\"" Sep 9 23:46:14.228708 containerd[1869]: time="2025-09-09T23:46:14.228675360Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:46:14.244390 containerd[1869]: time="2025-09-09T23:46:14.244352660Z" level=info msg="Container 0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:46:14.260239 containerd[1869]: time="2025-09-09T23:46:14.260203246Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361\"" Sep 9 23:46:14.261797 containerd[1869]: time="2025-09-09T23:46:14.261768489Z" level=info msg="StartContainer for \"0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361\"" Sep 9 23:46:14.262493 containerd[1869]: time="2025-09-09T23:46:14.262467685Z" level=info msg="connecting to shim 0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361" address="unix:///run/containerd/s/213259cbbd167056633942b0c49383747fe7a219d07752eb0a129d70086f3772" protocol=ttrpc version=3 Sep 9 23:46:14.278943 systemd[1]: Started cri-containerd-0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361.scope - libcontainer container 0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361. 
Sep 9 23:46:14.308319 containerd[1869]: time="2025-09-09T23:46:14.308228727Z" level=info msg="StartContainer for \"0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361\" returns successfully" Sep 9 23:46:14.313201 systemd[1]: cri-containerd-0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361.scope: Deactivated successfully. Sep 9 23:46:14.317351 containerd[1869]: time="2025-09-09T23:46:14.317309924Z" level=info msg="received exit event container_id:\"0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361\" id:\"0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361\" pid:5142 exited_at:{seconds:1757461574 nanos:317096710}" Sep 9 23:46:14.317546 containerd[1869]: time="2025-09-09T23:46:14.317514658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361\" id:\"0ca2499aa5fb5e1632fdc99b2d11d1cd42087eae6c618cee4090688b4cb67361\" pid:5142 exited_at:{seconds:1757461574 nanos:317096710}" Sep 9 23:46:14.390349 sshd[5077]: Accepted publickey for core from 10.200.16.10 port 48394 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:14.391835 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:14.395876 systemd-logind[1852]: New session 26 of user core. Sep 9 23:46:14.404917 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 23:46:14.704660 sshd[5176]: Connection closed by 10.200.16.10 port 48394 Sep 9 23:46:14.708602 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:14.711971 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:48394.service: Deactivated successfully. Sep 9 23:46:14.714392 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 23:46:14.715850 systemd-logind[1852]: Session 26 logged out. Waiting for processes to exit. Sep 9 23:46:14.717500 systemd-logind[1852]: Removed session 26. 
Sep 9 23:46:14.798051 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.16.10:48410.service - OpenSSH per-connection server daemon (10.200.16.10:48410).
Sep 9 23:46:14.826464 containerd[1869]: time="2025-09-09T23:46:14.826429777Z" level=info msg="StopPodSandbox for \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\""
Sep 9 23:46:14.826642 containerd[1869]: time="2025-09-09T23:46:14.826548325Z" level=info msg="TearDown network for sandbox \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" successfully"
Sep 9 23:46:14.826642 containerd[1869]: time="2025-09-09T23:46:14.826556773Z" level=info msg="StopPodSandbox for \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" returns successfully"
Sep 9 23:46:14.827854 containerd[1869]: time="2025-09-09T23:46:14.826884190Z" level=info msg="RemovePodSandbox for \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\""
Sep 9 23:46:14.827854 containerd[1869]: time="2025-09-09T23:46:14.826909199Z" level=info msg="Forcibly stopping sandbox \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\""
Sep 9 23:46:14.827854 containerd[1869]: time="2025-09-09T23:46:14.826962816Z" level=info msg="TearDown network for sandbox \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" successfully"
Sep 9 23:46:14.827854 containerd[1869]: time="2025-09-09T23:46:14.827713325Z" level=info msg="Ensure that sandbox a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93 in task-service has been cleanup successfully"
Sep 9 23:46:14.840247 containerd[1869]: time="2025-09-09T23:46:14.840211129Z" level=info msg="RemovePodSandbox \"a30bef708c7a11d138b049af781024f9742de1c22866e33707f314e996e5ac93\" returns successfully"
Sep 9 23:46:14.840644 containerd[1869]: time="2025-09-09T23:46:14.840614100Z" level=info msg="StopPodSandbox for \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\""
Sep 9 23:46:14.840713 containerd[1869]: time="2025-09-09T23:46:14.840695935Z" level=info msg="TearDown network for sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" successfully"
Sep 9 23:46:14.840713 containerd[1869]: time="2025-09-09T23:46:14.840710855Z" level=info msg="StopPodSandbox for \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" returns successfully"
Sep 9 23:46:14.841030 containerd[1869]: time="2025-09-09T23:46:14.840999751Z" level=info msg="RemovePodSandbox for \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\""
Sep 9 23:46:14.841101 containerd[1869]: time="2025-09-09T23:46:14.841034520Z" level=info msg="Forcibly stopping sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\""
Sep 9 23:46:14.841125 containerd[1869]: time="2025-09-09T23:46:14.841093146Z" level=info msg="TearDown network for sandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" successfully"
Sep 9 23:46:14.841911 containerd[1869]: time="2025-09-09T23:46:14.841883552Z" level=info msg="Ensure that sandbox da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61 in task-service has been cleanup successfully"
Sep 9 23:46:14.851615 containerd[1869]: time="2025-09-09T23:46:14.851581726Z" level=info msg="RemovePodSandbox \"da03f03840ad556630d2da975c0de70a75e4018fdbde2480e417120a1af87e61\" returns successfully"
Sep 9 23:46:14.916389 kubelet[3373]: E0909 23:46:14.916344 3373 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 23:46:15.229371 containerd[1869]: time="2025-09-09T23:46:15.229247046Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:46:15.248957 containerd[1869]: time="2025-09-09T23:46:15.248712964Z" level=info msg="Container fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:46:15.250749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2727076679.mount: Deactivated successfully.
Sep 9 23:46:15.261686 sshd[5183]: Accepted publickey for core from 10.200.16.10 port 48410 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs
Sep 9 23:46:15.263066 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:15.267022 systemd-logind[1852]: New session 27 of user core.
Sep 9 23:46:15.270255 containerd[1869]: time="2025-09-09T23:46:15.270144193Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3\""
Sep 9 23:46:15.270980 containerd[1869]: time="2025-09-09T23:46:15.270950191Z" level=info msg="StartContainer for \"fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3\""
Sep 9 23:46:15.272860 containerd[1869]: time="2025-09-09T23:46:15.272670919Z" level=info msg="connecting to shim fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3" address="unix:///run/containerd/s/213259cbbd167056633942b0c49383747fe7a219d07752eb0a129d70086f3772" protocol=ttrpc version=3
Sep 9 23:46:15.272938 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 23:46:15.295929 systemd[1]: Started cri-containerd-fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3.scope - libcontainer container fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3.
Sep 9 23:46:15.329148 containerd[1869]: time="2025-09-09T23:46:15.329111571Z" level=info msg="StartContainer for \"fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3\" returns successfully"
Sep 9 23:46:15.332787 systemd[1]: cri-containerd-fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3.scope: Deactivated successfully.
Sep 9 23:46:15.334108 containerd[1869]: time="2025-09-09T23:46:15.334073925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3\" id:\"fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3\" pid:5205 exited_at:{seconds:1757461575 nanos:332655318}"
Sep 9 23:46:15.334249 containerd[1869]: time="2025-09-09T23:46:15.334193633Z" level=info msg="received exit event container_id:\"fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3\" id:\"fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3\" pid:5205 exited_at:{seconds:1757461575 nanos:332655318}"
Sep 9 23:46:15.350529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa837fd3b9cb91a4a0e16d78eb131aa9de4512c6f18061c4499044eafff70db3-rootfs.mount: Deactivated successfully.
Sep 9 23:46:16.233474 containerd[1869]: time="2025-09-09T23:46:16.232891626Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:46:16.254326 containerd[1869]: time="2025-09-09T23:46:16.254191811Z" level=info msg="Container f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:46:16.255874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333316645.mount: Deactivated successfully.
Sep 9 23:46:16.274161 containerd[1869]: time="2025-09-09T23:46:16.274119518Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c\""
Sep 9 23:46:16.274874 containerd[1869]: time="2025-09-09T23:46:16.274740487Z" level=info msg="StartContainer for \"f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c\""
Sep 9 23:46:16.280678 containerd[1869]: time="2025-09-09T23:46:16.280644228Z" level=info msg="connecting to shim f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c" address="unix:///run/containerd/s/213259cbbd167056633942b0c49383747fe7a219d07752eb0a129d70086f3772" protocol=ttrpc version=3
Sep 9 23:46:16.301923 systemd[1]: Started cri-containerd-f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c.scope - libcontainer container f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c.
Sep 9 23:46:16.329951 systemd[1]: cri-containerd-f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c.scope: Deactivated successfully.
Sep 9 23:46:16.331040 containerd[1869]: time="2025-09-09T23:46:16.331007423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c\" id:\"f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c\" pid:5254 exited_at:{seconds:1757461576 nanos:330198112}"
Sep 9 23:46:16.332567 containerd[1869]: time="2025-09-09T23:46:16.332471544Z" level=info msg="received exit event container_id:\"f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c\" id:\"f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c\" pid:5254 exited_at:{seconds:1757461576 nanos:330198112}"
Sep 9 23:46:16.334248 containerd[1869]: time="2025-09-09T23:46:16.334226768Z" level=info msg="StartContainer for \"f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c\" returns successfully"
Sep 9 23:46:16.350988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f45c88d3340de388fbc33732b4ddfaa0c9938d60df565ff9c732645298612a3c-rootfs.mount: Deactivated successfully.
Sep 9 23:46:17.237812 containerd[1869]: time="2025-09-09T23:46:17.237728975Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:46:17.261885 containerd[1869]: time="2025-09-09T23:46:17.260497801Z" level=info msg="Container 27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:46:17.263035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271807433.mount: Deactivated successfully.
Sep 9 23:46:17.282818 containerd[1869]: time="2025-09-09T23:46:17.281432032Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb\""
Sep 9 23:46:17.284819 containerd[1869]: time="2025-09-09T23:46:17.284215709Z" level=info msg="StartContainer for \"27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb\""
Sep 9 23:46:17.285128 containerd[1869]: time="2025-09-09T23:46:17.285108942Z" level=info msg="connecting to shim 27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb" address="unix:///run/containerd/s/213259cbbd167056633942b0c49383747fe7a219d07752eb0a129d70086f3772" protocol=ttrpc version=3
Sep 9 23:46:17.312933 systemd[1]: Started cri-containerd-27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb.scope - libcontainer container 27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb.
Sep 9 23:46:17.334011 systemd[1]: cri-containerd-27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb.scope: Deactivated successfully.
Sep 9 23:46:17.336133 containerd[1869]: time="2025-09-09T23:46:17.336100395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb\" id:\"27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb\" pid:5295 exited_at:{seconds:1757461577 nanos:335205922}"
Sep 9 23:46:17.339945 containerd[1869]: time="2025-09-09T23:46:17.339850947Z" level=info msg="received exit event container_id:\"27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb\" id:\"27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb\" pid:5295 exited_at:{seconds:1757461577 nanos:335205922}"
Sep 9 23:46:17.345322 containerd[1869]: time="2025-09-09T23:46:17.345300595Z" level=info msg="StartContainer for \"27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb\" returns successfully"
Sep 9 23:46:17.355410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27f5e8b8385e1f7f70d718dabc2e2055c157e73da54cbeb1fb26a10b93a26bdb-rootfs.mount: Deactivated successfully.
Sep 9 23:46:18.242324 containerd[1869]: time="2025-09-09T23:46:18.242280359Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:46:18.267139 containerd[1869]: time="2025-09-09T23:46:18.265247545Z" level=info msg="Container d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:46:18.266249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount908646395.mount: Deactivated successfully.
Sep 9 23:46:18.284385 containerd[1869]: time="2025-09-09T23:46:18.284341214Z" level=info msg="CreateContainer within sandbox \"c8b1e8c8c077a9808ffbf9c4e74465c6cc75d1e1b667416673cfae842143d495\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\""
Sep 9 23:46:18.285178 containerd[1869]: time="2025-09-09T23:46:18.285018913Z" level=info msg="StartContainer for \"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\""
Sep 9 23:46:18.287212 containerd[1869]: time="2025-09-09T23:46:18.287163285Z" level=info msg="connecting to shim d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9" address="unix:///run/containerd/s/213259cbbd167056633942b0c49383747fe7a219d07752eb0a129d70086f3772" protocol=ttrpc version=3
Sep 9 23:46:18.308930 systemd[1]: Started cri-containerd-d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9.scope - libcontainer container d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9.
Sep 9 23:46:18.338163 containerd[1869]: time="2025-09-09T23:46:18.338120229Z" level=info msg="StartContainer for \"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\" returns successfully"
Sep 9 23:46:18.417496 containerd[1869]: time="2025-09-09T23:46:18.417448597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\" id:\"e6e2d81d41624c81e22e7d4e205c9d0aa08e4d8958d6e2c3ebc83352ec41f103\" pid:5362 exited_at:{seconds:1757461578 nanos:416895950}"
Sep 9 23:46:18.502118 kubelet[3373]: I0909 23:46:18.501382 3373 setters.go:600] "Node became not ready" node="ci-4426.0.0-n-3e4141976f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T23:46:18Z","lastTransitionTime":"2025-09-09T23:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 23:46:18.731817 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 23:46:19.686298 containerd[1869]: time="2025-09-09T23:46:19.686255191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\" id:\"0ce2fa59e396115d60301e6433908c6435420a256aea43ab1498577a6faa00e5\" pid:5439 exit_status:1 exited_at:{seconds:1757461579 nanos:685553228}"
Sep 9 23:46:21.236969 systemd-networkd[1687]: lxc_health: Link UP
Sep 9 23:46:21.250889 systemd-networkd[1687]: lxc_health: Gained carrier
Sep 9 23:46:21.848011 containerd[1869]: time="2025-09-09T23:46:21.847967895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\" id:\"686f057d5d390c5abc06e857db9733c88d16d6935c80020ed4e93fca6000706d\" pid:5882 exited_at:{seconds:1757461581 nanos:847064222}"
Sep 9 23:46:22.149152 kubelet[3373]: I0909 23:46:22.148988 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r5kd2" podStartSLOduration=9.148970201 podStartE2EDuration="9.148970201s" podCreationTimestamp="2025-09-09 23:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:46:19.259392281 +0000 UTC m=+184.609984636" watchObservedRunningTime="2025-09-09 23:46:22.148970201 +0000 UTC m=+187.499562580"
Sep 9 23:46:22.415941 systemd-networkd[1687]: lxc_health: Gained IPv6LL
Sep 9 23:46:23.989563 containerd[1869]: time="2025-09-09T23:46:23.989475898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\" id:\"d7335af6c0f33863a498f9893a7f4d51d44254be42a6a06a2d900c2db283f363\" pid:5925 exited_at:{seconds:1757461583 nanos:988925747}"
Sep 9 23:46:26.077958 containerd[1869]: time="2025-09-09T23:46:26.077834560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\" id:\"de9a047e5e9dc5852fb439ac284e6b702ba7178b2f41b93c7c76ba6291019ffe\" pid:5948 exited_at:{seconds:1757461586 nanos:76809059}"
Sep 9 23:46:28.162853 containerd[1869]: time="2025-09-09T23:46:28.162734257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b247d67d08c482735030416f283659772b9b11aa0a0837f23885be99df7af9\" id:\"804d22ccd875f055d01d8d9610af9b92b4051affa34a2b3734f32a79dd05267f\" pid:5970 exited_at:{seconds:1757461588 nanos:161840616}"
Sep 9 23:46:28.256851 sshd[5190]: Connection closed by 10.200.16.10 port 48410
Sep 9 23:46:28.257510 sshd-session[5183]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:28.260637 systemd[1]: sshd@24-10.200.20.12:22-10.200.16.10:48410.service: Deactivated successfully.
Sep 9 23:46:28.264522 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 23:46:28.265375 systemd-logind[1852]: Session 27 logged out. Waiting for processes to exit.
Sep 9 23:46:28.267299 systemd-logind[1852]: Removed session 27.