Sep 4 23:44:22.423136 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 23:44:22.423160 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025
Sep 4 23:44:22.423168 kernel: KASLR enabled
Sep 4 23:44:22.423173 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 4 23:44:22.423181 kernel: printk: bootconsole [pl11] enabled
Sep 4 23:44:22.423186 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:44:22.423193 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead5018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Sep 4 23:44:22.423199 kernel: random: crng init done
Sep 4 23:44:22.423205 kernel: secureboot: Secure boot disabled
Sep 4 23:44:22.423211 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:44:22.423217 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 4 23:44:22.423223 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423229 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423236 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 4 23:44:22.423243 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423249 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423256 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423263 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423269 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423276 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423282 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 4 23:44:22.423288 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:22.423294 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 4 23:44:22.423300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 4 23:44:22.423307 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 4 23:44:22.423313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 4 23:44:22.423319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 4 23:44:22.423325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 4 23:44:22.423358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 4 23:44:22.423365 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 4 23:44:22.423371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 4 23:44:22.423378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 4 23:44:22.423384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 4 23:44:22.423393 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 4 23:44:22.423399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 4 23:44:22.423405 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 4 23:44:22.423411 kernel: Zone ranges:
Sep 4 23:44:22.423417 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 4 23:44:22.423423 kernel: DMA32 empty
Sep 4 23:44:22.423430 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 23:44:22.423440 kernel: Movable zone start for each node
Sep 4 23:44:22.423447 kernel: Early memory node ranges
Sep 4 23:44:22.423453 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 4 23:44:22.423460 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Sep 4 23:44:22.423466 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Sep 4 23:44:22.423474 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Sep 4 23:44:22.423481 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 4 23:44:22.423487 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 4 23:44:22.423494 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 4 23:44:22.423500 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 4 23:44:22.423507 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 23:44:22.423513 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 4 23:44:22.423520 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 4 23:44:22.423526 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:44:22.423533 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 23:44:22.423539 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:44:22.423546 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 4 23:44:22.423554 kernel: psci: SMC Calling Convention v1.4
Sep 4 23:44:22.423560 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 4 23:44:22.423567 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 4 23:44:22.423573 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 23:44:22.423580 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 23:44:22.423587 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 23:44:22.423593 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:44:22.423600 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:44:22.423607 kernel: CPU features: detected: Hardware dirty bit management
Sep 4 23:44:22.423613 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:44:22.423620 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 23:44:22.423628 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 23:44:22.423634 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 23:44:22.423641 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 4 23:44:22.423647 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 23:44:22.423654 kernel: alternatives: applying boot alternatives
Sep 4 23:44:22.423661 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:22.423668 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:44:22.423675 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:44:22.423682 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:44:22.423688 kernel: Fallback order for Node 0: 0
Sep 4 23:44:22.423695 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 4 23:44:22.423703 kernel: Policy zone: Normal
Sep 4 23:44:22.423710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:44:22.423716 kernel: software IO TLB: area num 2.
Sep 4 23:44:22.423723 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB)
Sep 4 23:44:22.423729 kernel: Memory: 3983528K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 210632K reserved, 0K cma-reserved)
Sep 4 23:44:22.423736 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:44:22.423743 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:44:22.423750 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:44:22.423757 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:44:22.423763 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:44:22.423770 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:44:22.423778 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:44:22.423785 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:44:22.423792 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:44:22.423799 kernel: GICv3: 960 SPIs implemented
Sep 4 23:44:22.423805 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:44:22.423811 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:44:22.423818 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 23:44:22.423824 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 4 23:44:22.423831 kernel: ITS: No ITS available, not enabling LPIs
Sep 4 23:44:22.423838 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:44:22.423844 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:22.423851 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 23:44:22.423859 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 23:44:22.423866 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 23:44:22.423873 kernel: Console: colour dummy device 80x25
Sep 4 23:44:22.423879 kernel: printk: console [tty1] enabled
Sep 4 23:44:22.423886 kernel: ACPI: Core revision 20230628
Sep 4 23:44:22.423893 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 23:44:22.423900 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:44:22.423907 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:44:22.423913 kernel: landlock: Up and running.
Sep 4 23:44:22.423922 kernel: SELinux: Initializing.
Sep 4 23:44:22.423928 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:22.423935 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:22.423942 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:22.423949 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:22.423955 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 4 23:44:22.423962 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 4 23:44:22.423976 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 23:44:22.423983 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:44:22.423990 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:44:22.423997 kernel: Remapping and enabling EFI services.
Sep 4 23:44:22.424004 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:44:22.424013 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:44:22.424020 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 4 23:44:22.424027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:22.424034 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 23:44:22.424041 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:44:22.424050 kernel: SMP: Total of 2 processors activated.
Sep 4 23:44:22.424057 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:44:22.424064 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 4 23:44:22.424071 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 23:44:22.424078 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:44:22.424086 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 23:44:22.424093 kernel: CPU features: detected: LSE atomic instructions
Sep 4 23:44:22.424100 kernel: CPU features: detected: Privileged Access Never
Sep 4 23:44:22.424107 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:44:22.424115 kernel: alternatives: applying system-wide alternatives
Sep 4 23:44:22.424122 kernel: devtmpfs: initialized
Sep 4 23:44:22.424129 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:44:22.424137 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:44:22.424144 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:44:22.424151 kernel: SMBIOS 3.1.0 present.
Sep 4 23:44:22.424158 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 4 23:44:22.424165 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:44:22.424173 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:44:22.424181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:44:22.424188 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:44:22.424196 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:44:22.424203 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 4 23:44:22.424210 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:44:22.424217 kernel: cpuidle: using governor menu
Sep 4 23:44:22.424224 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:44:22.424231 kernel: ASID allocator initialised with 32768 entries
Sep 4 23:44:22.424238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:44:22.424246 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:44:22.424253 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 23:44:22.424260 kernel: Modules: 0 pages in range for non-PLT usage
Sep 4 23:44:22.424268 kernel: Modules: 509248 pages in range for PLT usage
Sep 4 23:44:22.424275 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:44:22.424282 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:44:22.424289 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:44:22.424296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:44:22.424304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:44:22.424313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:44:22.424320 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:44:22.424327 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:44:22.424343 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:44:22.424350 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:44:22.424357 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:44:22.424364 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:44:22.424371 kernel: ACPI: Interpreter enabled
Sep 4 23:44:22.424378 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:44:22.424387 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 23:44:22.424395 kernel: printk: console [ttyAMA0] enabled
Sep 4 23:44:22.424401 kernel: printk: bootconsole [pl11] disabled
Sep 4 23:44:22.424408 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 4 23:44:22.424416 kernel: iommu: Default domain type: Translated
Sep 4 23:44:22.424423 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:44:22.424430 kernel: efivars: Registered efivars operations
Sep 4 23:44:22.424437 kernel: vgaarb: loaded
Sep 4 23:44:22.424444 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:44:22.424452 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:44:22.424460 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:44:22.424467 kernel: pnp: PnP ACPI init
Sep 4 23:44:22.424474 kernel: pnp: PnP ACPI: found 0 devices
Sep 4 23:44:22.424481 kernel: NET: Registered PF_INET protocol family
Sep 4 23:44:22.424488 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:44:22.424495 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:44:22.424502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:44:22.424509 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:44:22.424518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:44:22.424525 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:44:22.424532 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:22.424539 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:22.424546 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:44:22.424553 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:44:22.424560 kernel: kvm [1]: HYP mode not available
Sep 4 23:44:22.424567 kernel: Initialise system trusted keyrings
Sep 4 23:44:22.424574 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:44:22.424583 kernel: Key type asymmetric registered
Sep 4 23:44:22.424590 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:44:22.424597 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 23:44:22.424604 kernel: io scheduler mq-deadline registered
Sep 4 23:44:22.424611 kernel: io scheduler kyber registered
Sep 4 23:44:22.424618 kernel: io scheduler bfq registered
Sep 4 23:44:22.424625 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:44:22.424632 kernel: thunder_xcv, ver 1.0
Sep 4 23:44:22.424639 kernel: thunder_bgx, ver 1.0
Sep 4 23:44:22.424648 kernel: nicpf, ver 1.0
Sep 4 23:44:22.424655 kernel: nicvf, ver 1.0
Sep 4 23:44:22.424804 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 23:44:22.424877 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:44:21 UTC (1757029461)
Sep 4 23:44:22.424886 kernel: efifb: probing for efifb
Sep 4 23:44:22.424894 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 23:44:22.424901 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 23:44:22.424908 kernel: efifb: scrolling: redraw
Sep 4 23:44:22.424918 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 23:44:22.424925 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 23:44:22.424932 kernel: fb0: EFI VGA frame buffer device
Sep 4 23:44:22.424939 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 4 23:44:22.424946 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:44:22.424953 kernel: No ACPI PMU IRQ for CPU0
Sep 4 23:44:22.424960 kernel: No ACPI PMU IRQ for CPU1
Sep 4 23:44:22.424967 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 4 23:44:22.424974 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 23:44:22.424983 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 23:44:22.424991 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:44:22.424998 kernel: Segment Routing with IPv6
Sep 4 23:44:22.425005 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:44:22.425012 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:44:22.425019 kernel: Key type dns_resolver registered
Sep 4 23:44:22.425026 kernel: registered taskstats version 1
Sep 4 23:44:22.425033 kernel: Loading compiled-in X.509 certificates
Sep 4 23:44:22.425040 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0'
Sep 4 23:44:22.425049 kernel: Key type .fscrypt registered
Sep 4 23:44:22.425056 kernel: Key type fscrypt-provisioning registered
Sep 4 23:44:22.425063 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:44:22.425070 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:44:22.425077 kernel: ima: No architecture policies found
Sep 4 23:44:22.425084 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 23:44:22.425091 kernel: clk: Disabling unused clocks
Sep 4 23:44:22.425098 kernel: Freeing unused kernel memory: 38400K
Sep 4 23:44:22.425105 kernel: Run /init as init process
Sep 4 23:44:22.425113 kernel: with arguments:
Sep 4 23:44:22.425120 kernel: /init
Sep 4 23:44:22.425127 kernel: with environment:
Sep 4 23:44:22.425134 kernel: HOME=/
Sep 4 23:44:22.425141 kernel: TERM=linux
Sep 4 23:44:22.425148 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:44:22.425157 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:44:22.425167 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:22.425176 systemd[1]: Detected virtualization microsoft.
Sep 4 23:44:22.425184 systemd[1]: Detected architecture arm64.
Sep 4 23:44:22.425191 systemd[1]: Running in initrd.
Sep 4 23:44:22.425199 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:44:22.425206 systemd[1]: Hostname set to .
Sep 4 23:44:22.425214 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:44:22.425221 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:44:22.425229 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:22.425238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:22.425247 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:44:22.425255 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:22.425262 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:44:22.425271 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:44:22.425280 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:44:22.425289 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:44:22.425297 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:22.425305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:22.425312 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:44:22.425320 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:22.425336 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:22.425346 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:44:22.425354 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:22.425361 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:22.425371 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:44:22.425379 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:44:22.425387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:22.425395 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:22.425402 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:22.425410 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:44:22.425418 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:44:22.425425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:22.425435 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:44:22.425443 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:44:22.425450 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:22.425458 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:22.425482 systemd-journald[218]: Collecting audit messages is disabled.
Sep 4 23:44:22.425505 systemd-journald[218]: Journal started
Sep 4 23:44:22.425522 systemd-journald[218]: Runtime Journal (/run/log/journal/c82dff3c62de4c709ca641793db0e394) is 8M, max 78.5M, 70.5M free.
Sep 4 23:44:22.431653 systemd-modules-load[221]: Inserted module 'overlay'
Sep 4 23:44:22.454631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:44:22.459646 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 4 23:44:22.465754 kernel: Bridge firewalling registered
Sep 4 23:44:22.486700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:22.494343 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:22.501690 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:22.516766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:22.525596 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:44:22.531768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:22.547644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:22.575643 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:22.586492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:22.615770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:44:22.629699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:22.661617 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:22.674142 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:22.690138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:44:22.705877 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:22.737576 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:44:22.746841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:22.775758 dracut-cmdline[254]: dracut-dracut-053
Sep 4 23:44:22.782692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:22.801883 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:22.806143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:22.852855 systemd-resolved[256]: Positive Trust Anchors:
Sep 4 23:44:22.852865 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:22.852896 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:22.863786 systemd-resolved[256]: Defaulting to hostname 'linux'.
Sep 4 23:44:22.865746 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:22.888818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:23.008364 kernel: SCSI subsystem initialized
Sep 4 23:44:23.016380 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:44:23.028366 kernel: iscsi: registered transport (tcp)
Sep 4 23:44:23.047019 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:44:23.047040 kernel: QLogic iSCSI HBA Driver
Sep 4 23:44:23.081013 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:23.098598 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:44:23.133482 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:44:23.133546 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:44:23.139719 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:44:23.188354 kernel: raid6: neonx8 gen() 15792 MB/s
Sep 4 23:44:23.208352 kernel: raid6: neonx4 gen() 15779 MB/s
Sep 4 23:44:23.228343 kernel: raid6: neonx2 gen() 13226 MB/s
Sep 4 23:44:23.249341 kernel: raid6: neonx1 gen() 10558 MB/s
Sep 4 23:44:23.270341 kernel: raid6: int64x8 gen() 6798 MB/s
Sep 4 23:44:23.290340 kernel: raid6: int64x4 gen() 7354 MB/s
Sep 4 23:44:23.312341 kernel: raid6: int64x2 gen() 6114 MB/s
Sep 4 23:44:23.337037 kernel: raid6: int64x1 gen() 5061 MB/s
Sep 4 23:44:23.337048 kernel: raid6: using algorithm neonx8 gen() 15792 MB/s
Sep 4 23:44:23.363248 kernel: raid6: .... xor() 11763 MB/s, rmw enabled
Sep 4 23:44:23.363261 kernel: raid6: using neon recovery algorithm
Sep 4 23:44:23.375837 kernel: xor: measuring software checksum speed
Sep 4 23:44:23.375855 kernel: 8regs : 21550 MB/sec
Sep 4 23:44:23.379903 kernel: 32regs : 21596 MB/sec
Sep 4 23:44:23.383686 kernel: arm64_neon : 27984 MB/sec
Sep 4 23:44:23.388663 kernel: xor: using function: arm64_neon (27984 MB/sec)
Sep 4 23:44:23.440347 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:44:23.450675 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:23.468475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:23.494638 systemd-udevd[441]: Using default interface naming scheme 'v255'.
Sep 4 23:44:23.501474 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:23.520576 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:44:23.537318 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation
Sep 4 23:44:23.569947 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:23.587605 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:23.637159 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:23.660667 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:44:23.691938 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:23.707250 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:23.725742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:23.736803 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:23.769375 kernel: hv_vmbus: Vmbus version:5.3
Sep 4 23:44:23.770643 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:44:23.805607 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 4 23:44:23.805633 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 4 23:44:23.806936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:23.860275 kernel: hv_vmbus: registering driver hv_storvsc
Sep 4 23:44:23.860313 kernel: scsi host0: storvsc_host_t
Sep 4 23:44:23.860581 kernel: scsi host1: storvsc_host_t
Sep 4 23:44:23.860696 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 4 23:44:23.860708 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 4 23:44:23.807098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:23.880527 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 4 23:44:23.839896 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:23.929397 kernel: hv_vmbus: registering driver hid_hyperv
Sep 4 23:44:23.929423 kernel: hv_vmbus: registering driver hv_netvsc
Sep 4 23:44:23.929433 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 4 23:44:23.929595 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 4 23:44:23.860528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:23.964288 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 4 23:44:23.860839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:23.915769 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:24.009454 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 4 23:44:24.009659 kernel: PTP clock support registered
Sep 4 23:44:24.009671 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 23:44:23.972494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:23.583293 kernel: hv_utils: Registering HyperV Utility Driver
Sep 4 23:44:23.601275 kernel: hv_vmbus: registering driver hv_utils
Sep 4 23:44:23.601294 kernel: hv_utils: Heartbeat IC version 3.0
Sep 4 23:44:23.601302 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 4 23:44:23.601444 kernel: hv_utils: Shutdown IC version 3.2
Sep 4 23:44:23.601452 kernel: hv_utils: TimeSync IC version 4.0
Sep 4 23:44:23.601461 kernel: hv_netvsc 00224877-ba22-0022-4877-ba2200224877 eth0: VF slot 1 added
Sep 4 23:44:23.601556 systemd-journald[218]: Time jumped backwards, rotating.
Sep 4 23:44:23.995830 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:23.996220 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:24.042859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:24.042964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:23.583123 systemd-resolved[256]: Clock change detected. Flushing caches.
Sep 4 23:44:23.674959 kernel: hv_vmbus: registering driver hv_pci
Sep 4 23:44:23.674996 kernel: hv_pci f9a810d6-2069-4f8e-b6b9-9966c5bb3dad: PCI VMBus probing: Using version 0x10004
Sep 4 23:44:23.675465 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 4 23:44:23.613734 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:23.746282 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 4 23:44:23.746505 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 4 23:44:23.746608 kernel: hv_pci f9a810d6-2069-4f8e-b6b9-9966c5bb3dad: PCI host bridge to bus 2069:00
Sep 4 23:44:23.746710 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 4 23:44:23.746803 kernel: pci_bus 2069:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 4 23:44:23.746912 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 4 23:44:23.747004 kernel: pci_bus 2069:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 4 23:44:23.624529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:23.781875 kernel: pci 2069:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 4 23:44:23.781939 kernel: pci 2069:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 23:44:23.695261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:23.797289 kernel: pci 2069:00:02.0: enabling Extended Tags
Sep 4 23:44:23.808483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:23.808545 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 4 23:44:23.808734 kernel: pci 2069:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2069:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 4 23:44:23.823635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:23.854281 kernel: pci_bus 2069:00: busn_res: [bus 00-ff] end is updated to 00
Sep 4 23:44:23.864609 kernel: pci 2069:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 23:44:23.866998 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:23.929665 kernel: mlx5_core 2069:00:02.0: enabling device (0000 -> 0002)
Sep 4 23:44:23.938203 kernel: mlx5_core 2069:00:02.0: firmware version: 16.31.2424
Sep 4 23:44:24.233530 kernel: hv_netvsc 00224877-ba22-0022-4877-ba2200224877 eth0: VF registering: eth1
Sep 4 23:44:24.233745 kernel: mlx5_core 2069:00:02.0 eth1: joined to eth0
Sep 4 23:44:24.245246 kernel: mlx5_core 2069:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 4 23:44:24.258210 kernel: mlx5_core 2069:00:02.0 enP8297s1: renamed from eth1
Sep 4 23:44:24.577805 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 4 23:44:24.624396 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 4 23:44:24.659268 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (498)
Sep 4 23:44:24.669380 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (493)
Sep 4 23:44:24.682980 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 4 23:44:24.692279 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 4 23:44:24.731104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 23:44:24.752356 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:44:24.788945 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:25.803242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:25.804080 disk-uuid[608]: The operation has completed successfully.
Sep 4 23:44:25.874355 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:44:25.874451 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:44:25.925341 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:44:25.943008 sh[694]: Success
Sep 4 23:44:25.976279 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 23:44:26.364059 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:44:26.372720 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:44:26.391354 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:44:26.435058 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae
Sep 4 23:44:26.435117 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:26.444920 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:44:26.451800 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:44:26.457555 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:44:26.865445 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:44:26.871923 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:44:26.885438 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:44:26.907431 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:44:26.966778 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:26.966841 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:26.972108 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:27.015787 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:27.038339 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:27.061302 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:27.061325 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:27.079907 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:44:27.096533 systemd-networkd[869]: lo: Link UP
Sep 4 23:44:27.096545 systemd-networkd[869]: lo: Gained carrier
Sep 4 23:44:27.100380 systemd-networkd[869]: Enumeration completed
Sep 4 23:44:27.109521 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:27.109525 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:27.114646 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:44:27.122892 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:27.146465 systemd[1]: Reached target network.target - Network.
Sep 4 23:44:27.236204 kernel: mlx5_core 2069:00:02.0 enP8297s1: Link up
Sep 4 23:44:27.319209 kernel: hv_netvsc 00224877-ba22-0022-4877-ba2200224877 eth0: Data path switched to VF: enP8297s1
Sep 4 23:44:27.319654 systemd-networkd[869]: enP8297s1: Link UP
Sep 4 23:44:27.319723 systemd-networkd[869]: eth0: Link UP
Sep 4 23:44:27.319840 systemd-networkd[869]: eth0: Gained carrier
Sep 4 23:44:27.319848 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:27.351838 systemd-networkd[869]: enP8297s1: Gained carrier
Sep 4 23:44:27.366233 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 4 23:44:28.069853 ignition[877]: Ignition 2.20.0
Sep 4 23:44:28.069868 ignition[877]: Stage: fetch-offline
Sep 4 23:44:28.075916 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:44:28.069922 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:28.069932 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:28.070059 ignition[877]: parsed url from cmdline: ""
Sep 4 23:44:28.104471 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:44:28.070063 ignition[877]: no config URL provided
Sep 4 23:44:28.070067 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:44:28.070075 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:44:28.070080 ignition[877]: failed to fetch config: resource requires networking
Sep 4 23:44:28.070431 ignition[877]: Ignition finished successfully
Sep 4 23:44:28.130302 ignition[885]: Ignition 2.20.0
Sep 4 23:44:28.130311 ignition[885]: Stage: fetch
Sep 4 23:44:28.131473 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:28.131487 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:28.131975 ignition[885]: parsed url from cmdline: ""
Sep 4 23:44:28.131980 ignition[885]: no config URL provided
Sep 4 23:44:28.131988 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:44:28.132010 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:44:28.132046 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 4 23:44:28.259551 ignition[885]: GET result: OK
Sep 4 23:44:28.259659 ignition[885]: config has been read from IMDS userdata
Sep 4 23:44:28.259698 ignition[885]: parsing config with SHA512: 14c8b9d2396818013a0b26bd8e90642efb9a6bb0ec6547c68ef3bbe822c26edcde0d9d7b649fb0da30a6799fcf75d58f30ca5c5703b5cfc89472c935e0872a25
Sep 4 23:44:28.268065 unknown[885]: fetched base config from "system"
Sep 4 23:44:28.268491 ignition[885]: fetch: fetch complete
Sep 4 23:44:28.268073 unknown[885]: fetched base config from "system"
Sep 4 23:44:28.268496 ignition[885]: fetch: fetch passed
Sep 4 23:44:28.268078 unknown[885]: fetched user config from "azure"
Sep 4 23:44:28.268549 ignition[885]: Ignition finished successfully
Sep 4 23:44:28.270618 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:44:28.313854 ignition[892]: Ignition 2.20.0
Sep 4 23:44:28.290988 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:44:28.313861 ignition[892]: Stage: kargs
Sep 4 23:44:28.324158 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:44:28.314039 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:28.349360 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:44:28.314048 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:28.366956 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:44:28.315111 ignition[892]: kargs: kargs passed
Sep 4 23:44:28.379335 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:44:28.315169 ignition[892]: Ignition finished successfully
Sep 4 23:44:28.394349 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:44:28.364051 ignition[898]: Ignition 2.20.0
Sep 4 23:44:28.409594 systemd-networkd[869]: eth0: Gained IPv6LL
Sep 4 23:44:28.364057 ignition[898]: Stage: disks
Sep 4 23:44:28.410309 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:28.364307 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:28.422136 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:44:28.364318 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:28.436291 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:44:28.365695 ignition[898]: disks: disks passed
Sep 4 23:44:28.463409 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:44:28.365765 ignition[898]: Ignition finished successfully
Sep 4 23:44:28.582123 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 4 23:44:28.592371 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:44:28.611354 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:44:28.679392 kernel: EXT4-fs (sda9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none.
Sep 4 23:44:28.674104 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:44:28.680843 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:28.730273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:28.755368 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:44:28.782820 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (918)
Sep 4 23:44:28.782847 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:28.773494 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 23:44:28.823333 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:28.823356 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:28.806102 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:44:28.806143 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:44:28.815677 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:44:28.862514 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:44:28.889056 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:28.880962 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:29.532759 coreos-metadata[920]: Sep 04 23:44:29.532 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 23:44:29.545173 coreos-metadata[920]: Sep 04 23:44:29.545 INFO Fetch successful
Sep 4 23:44:29.551372 coreos-metadata[920]: Sep 04 23:44:29.551 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 4 23:44:29.564959 coreos-metadata[920]: Sep 04 23:44:29.564 INFO Fetch successful
Sep 4 23:44:29.579698 coreos-metadata[920]: Sep 04 23:44:29.579 INFO wrote hostname ci-4230.2.2-n-a8c1fd94a3 to /sysroot/etc/hostname
Sep 4 23:44:29.590420 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:44:30.227064 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:44:30.301491 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:44:30.328044 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:44:30.341288 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:44:31.576049 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:31.598385 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:44:31.627213 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:31.627431 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:44:31.639065 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:44:31.671818 ignition[1036]: INFO : Ignition 2.20.0
Sep 4 23:44:31.671818 ignition[1036]: INFO : Stage: mount
Sep 4 23:44:31.693960 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:31.693960 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:31.693960 ignition[1036]: INFO : mount: mount passed
Sep 4 23:44:31.693960 ignition[1036]: INFO : Ignition finished successfully
Sep 4 23:44:31.680932 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:44:31.711415 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:44:31.730223 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:44:31.766470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:31.800215 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1048)
Sep 4 23:44:31.819273 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:31.819289 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:31.825203 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:31.838200 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:31.840670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:31.873214 ignition[1066]: INFO : Ignition 2.20.0
Sep 4 23:44:31.873214 ignition[1066]: INFO : Stage: files
Sep 4 23:44:31.873214 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:31.873214 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:31.902700 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:44:31.923891 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:44:31.923891 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:44:32.000564 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:44:32.009430 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:44:32.009430 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:44:32.001030 unknown[1066]: wrote ssh authorized keys file for user: core
Sep 4 23:44:32.033458 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 4 23:44:32.046223 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:32.080546 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:44:32.356393 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 4 23:44:32.368320 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:32.368320 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:32.554706 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:32.629861 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:32.731257 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:32.731257 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:32.731257 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:32.731257 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 23:44:32.731257 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 23:44:32.731257 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 23:44:32.731257 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 4 23:44:32.981286 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:44:33.177753 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 4 23:44:33.192460 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:44:33.232890 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:44:33.247577 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:44:33.247577 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:44:33.247577 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:44:33.247577 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:44:33.247577 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:44:33.247577 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:44:33.247577 ignition[1066]: INFO : files: files passed
Sep 4 23:44:33.247577 ignition[1066]: INFO : Ignition finished successfully
Sep 4 23:44:33.358896 kernel: mlx5_core 2069:00:02.0: poll_health:835:(pid 1): device's health compromised - reached miss count
Sep 4 23:44:33.248008 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:44:33.296495 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:44:33.318453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:44:33.347419 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:44:33.411429 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:33.411429 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:33.347511 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:44:33.453018 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:33.370234 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:44:33.381694 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:44:33.411401 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:44:33.469511 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:44:33.469644 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:44:33.487752 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:44:33.501487 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:44:33.516678 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:44:33.519409 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:44:33.585095 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:44:33.611344 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:44:33.631485 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:33.640151 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:33.655603 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:44:33.668244 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:44:33.668371 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:44:33.687618 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:44:33.694863 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:44:33.708353 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:44:33.721452 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:44:33.735932 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:44:33.750868 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:44:33.763883 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:33.779701 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:44:33.792038 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:44:33.804931 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:44:33.815154 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:44:33.815285 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:33.835518 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:33.844189 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:33.861164 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:44:33.867800 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:33.876870 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:44:33.876991 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:33.892764 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:44:33.892879 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:44:33.902029 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:44:33.902129 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:44:33.997289 ignition[1118]: INFO : Ignition 2.20.0
Sep 4 23:44:33.997289 ignition[1118]: INFO : Stage: umount
Sep 4 23:44:33.997289 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:33.997289 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:33.997289 ignition[1118]: INFO : umount: umount passed
Sep 4 23:44:33.997289 ignition[1118]: INFO : Ignition finished successfully
Sep 4 23:44:33.915911 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 23:44:33.916021 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:44:33.959462 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:44:33.979486 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:44:33.979675 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:33.992468 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:44:34.002787 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:44:34.002952 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:34.014604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:44:34.014727 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:34.035553 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:44:34.035651 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:44:34.053499 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:44:34.053619 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:44:34.065395 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:44:34.065460 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:44:34.085334 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:44:34.085401 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:44:34.094799 systemd[1]: Stopped target network.target - Network.
Sep 4 23:44:34.099960 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:44:34.100031 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:44:34.113807 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:44:34.125086 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:44:34.128215 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:34.139338 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:44:34.150857 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:44:34.163415 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:44:34.163479 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:34.176520 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:44:34.176564 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:34.189401 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:44:34.189462 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:44:34.202799 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:44:34.202853 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:34.215639 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:44:34.227073 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:34.243631 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:44:34.243731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:34.265272 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:44:34.265753 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:44:34.265905 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:44:34.285992 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:44:34.286308 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:44:34.286526 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:44:34.303958 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:44:34.304050 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:34.648832 kernel: hv_netvsc 00224877-ba22-0022-4877-ba2200224877 eth0: Data path switched from VF: enP8297s1
Sep 4 23:44:34.346417 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:44:34.363095 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:44:34.363240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:34.378449 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:44:34.378511 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:34.398115 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:44:34.398206 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:34.406144 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:44:34.406220 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:34.427643 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:34.447786 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:44:34.447895 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:44:34.447945 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:34.448529 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:44:34.448623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:44:34.493661 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:44:34.495029 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:34.512776 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:44:34.512864 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:34.528164 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:44:34.528216 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:34.545244 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:44:34.545318 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:34.567126 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:44:34.567222 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:34.589316 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:34.589392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:34.620813 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:44:34.620892 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:34.663148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:44:34.681599 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:44:34.681687 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:34.703046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:34.703110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:34.717068 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:44:34.717136 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:34.717504 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:44:34.717615 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:44:34.728104 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:44:34.728244 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:44:34.741955 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:44:34.771405 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:44:34.807451 systemd[1]: Switching root.
Sep 4 23:44:35.065719 systemd-journald[218]: Journal stopped
Sep 4 23:44:43.510818 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:44:43.510859 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:44:43.510871 kernel: SELinux: policy capability open_perms=1
Sep 4 23:44:43.510884 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:44:43.510891 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:44:43.510899 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:44:43.510908 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:44:43.510916 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:44:43.510924 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:44:43.510932 kernel: audit: type=1403 audit(1757029476.422:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:44:43.510942 systemd[1]: Successfully loaded SELinux policy in 329.610ms.
Sep 4 23:44:43.510952 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.807ms.
Sep 4 23:44:43.510962 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:43.510971 systemd[1]: Detected virtualization microsoft.
Sep 4 23:44:43.510980 systemd[1]: Detected architecture arm64.
Sep 4 23:44:43.510991 systemd[1]: Detected first boot.
Sep 4 23:44:43.511000 systemd[1]: Hostname set to .
Sep 4 23:44:43.511009 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:44:43.511018 zram_generator::config[1163]: No configuration found.
Sep 4 23:44:43.511027 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:44:43.511035 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:44:43.511047 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:44:43.511056 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:44:43.511065 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:44:43.511073 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:44:43.511082 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:44:43.511092 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:44:43.511101 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:44:43.511111 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:44:43.511121 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:44:43.511131 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:44:43.511140 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:44:43.511149 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:44:43.511158 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:43.511168 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:43.511177 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:44:43.511209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:44:43.511225 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:44:43.511234 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:43.511244 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 23:44:43.511255 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:43.511264 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:44:43.511274 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:44:43.511283 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:43.511294 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:44:43.511306 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:43.511315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:43.511325 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:43.511334 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:43.511343 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:44:43.511352 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:44:43.511362 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:44:43.511373 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:43.511383 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:43.511393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:43.511402 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:44:43.511411 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:44:43.511422 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:44:43.511432 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:44:43.511441 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:44:43.511450 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:44:43.511460 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:44:43.511470 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:44:43.511479 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:44:43.511489 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:44:43.511499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:43.511511 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:43.511520 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:44:43.511530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:43.511539 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:43.511548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:43.511558 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:44:43.511567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:43.511577 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:44:43.511588 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:44:43.511597 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:44:43.511607 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:44:43.511616 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:44:43.511625 kernel: fuse: init (API version 7.39)
Sep 4 23:44:43.511634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:43.511643 kernel: loop: module loaded
Sep 4 23:44:43.511652 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:43.511663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:43.511672 kernel: ACPI: bus type drm_connector registered
Sep 4 23:44:43.511681 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:44:43.511690 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:44:43.511700 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:44:43.511742 systemd-journald[1260]: Collecting audit messages is disabled.
Sep 4 23:44:43.511764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:43.511775 systemd-journald[1260]: Journal started
Sep 4 23:44:43.511795 systemd-journald[1260]: Runtime Journal (/run/log/journal/5c91e624a9094d39bdcac3904cf83f26) is 8M, max 78.5M, 70.5M free.
Sep 4 23:44:42.343417 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:44:42.351004 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 4 23:44:42.351400 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:44:42.351731 systemd[1]: systemd-journald.service: Consumed 4.036s CPU time.
Sep 4 23:44:43.537718 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:44:43.537778 systemd[1]: Stopped verity-setup.service.
Sep 4 23:44:43.558936 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:43.561470 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:44:43.568392 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:44:43.576168 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:44:43.582412 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:44:43.589625 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:44:43.596852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:44:43.605309 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:44:43.613158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:43.621317 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:44:43.621483 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:44:43.629342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:43.629503 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:43.637322 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:43.637490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:43.644503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:43.646240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:43.654435 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:44:43.654621 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:44:43.661535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:43.661702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:43.668611 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:43.675663 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:44:43.683889 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:44:43.691675 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:44:43.699668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:43.716401 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:44:43.731329 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:44:43.739019 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:44:43.745655 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:44:43.745698 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:43.752796 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:44:43.768347 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:44:43.778427 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:44:43.784760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:43.787389 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:44:43.797737 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:44:43.804840 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:43.805975 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:44:43.812491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:43.814408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:43.823434 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:44:43.835523 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:44:43.846541 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:44:43.856876 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:44:43.866852 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:44:43.876010 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:44:43.883731 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:44:43.886326 systemd-journald[1260]: Time spent on flushing to /var/log/journal/5c91e624a9094d39bdcac3904cf83f26 is 1.006306s for 919 entries.
Sep 4 23:44:43.886326 systemd-journald[1260]: System Journal (/var/log/journal/5c91e624a9094d39bdcac3904cf83f26) is 11.8M, max 2.6G, 2.6G free.
Sep 4 23:44:46.130444 systemd-journald[1260]: Received client request to flush runtime journal.
Sep 4 23:44:46.130514 kernel: loop0: detected capacity change from 0 to 123192
Sep 4 23:44:46.130534 systemd-journald[1260]: /var/log/journal/5c91e624a9094d39bdcac3904cf83f26/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 4 23:44:46.130557 systemd-journald[1260]: Rotating system journal.
Sep 4 23:44:43.902428 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:44:43.913468 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:44:43.921300 udevadm[1306]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 23:44:44.664838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:46.132077 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:44:46.225031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:44:46.227254 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:44:47.401216 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:44:47.468224 kernel: loop1: detected capacity change from 0 to 211168
Sep 4 23:44:47.554485 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:44:47.566374 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:47.905202 kernel: loop2: detected capacity change from 0 to 28720
Sep 4 23:44:48.248699 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Sep 4 23:44:48.248721 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Sep 4 23:44:48.254243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:49.818219 kernel: loop3: detected capacity change from 0 to 113512
Sep 4 23:44:51.326695 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:44:51.350385 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:51.379588 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Sep 4 23:44:51.511223 kernel: loop4: detected capacity change from 0 to 123192
Sep 4 23:44:52.081221 kernel: loop5: detected capacity change from 0 to 211168
Sep 4 23:44:52.284214 kernel: loop6: detected capacity change from 0 to 28720
Sep 4 23:44:52.302220 kernel: loop7: detected capacity change from 0 to 113512
Sep 4 23:44:52.314046 (sd-merge)[1332]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 4 23:44:52.314525 (sd-merge)[1332]: Merged extensions into '/usr'.
Sep 4 23:44:52.318859 systemd[1]: Reload requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:44:52.319006 systemd[1]: Reloading...
Sep 4 23:44:52.390214 zram_generator::config[1359]: No configuration found.
Sep 4 23:44:52.527467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:52.600788 systemd[1]: Reloading finished in 281 ms.
Sep 4 23:44:52.616208 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:44:52.636411 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:44:52.642100 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:52.679881 systemd[1]: Reload requested from client PID 1415 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:44:52.680016 systemd[1]: Reloading...
Sep 4 23:44:52.702760 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:44:52.702990 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:44:52.703678 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:44:52.703899 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Sep 4 23:44:52.703949 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Sep 4 23:44:52.754245 zram_generator::config[1443]: No configuration found.
Sep 4 23:44:52.768009 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:52.768025 systemd-tmpfiles[1416]: Skipping /boot
Sep 4 23:44:52.776851 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:52.776869 systemd-tmpfiles[1416]: Skipping /boot
Sep 4 23:44:52.927521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:53.051392 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 23:44:53.051491 systemd[1]: Reloading finished in 371 ms.
Sep 4 23:44:53.053220 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:44:53.058700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:53.076246 kernel: hv_vmbus: registering driver hv_balloon
Sep 4 23:44:53.078634 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 4 23:44:53.086601 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 4 23:44:53.095842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:53.111262 kernel: hv_vmbus: registering driver hyperv_fb
Sep 4 23:44:53.126747 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 4 23:44:53.127149 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 4 23:44:53.133306 kernel: Console: switching to colour dummy device 80x25
Sep 4 23:44:53.139228 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 23:44:53.156055 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:44:53.173552 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:44:53.207319 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1479)
Sep 4 23:44:53.210579 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:44:53.219512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:53.224341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:53.245635 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:53.263673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:53.279604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:53.291931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:53.292100 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:53.301584 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:44:53.325605 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:53.345824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:53.355127 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:44:53.373478 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:44:53.382030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:53.393958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:53.395022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:53.403804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:53.403975 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:53.413760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:53.413942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:53.423623 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:53.423790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:53.457682 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:44:53.470230 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:44:53.498990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 23:44:53.517763 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:44:53.529438 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:44:53.540483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:53.540563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:53.542257 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:44:53.585642 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:44:53.591409 augenrules[1643]: No rules
Sep 4 23:44:53.592641 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:44:53.593303 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:44:53.622457 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:44:53.639274 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:44:53.647830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:53.664491 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:44:53.681215 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:44:53.683097 lvm[1655]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:44:53.713713 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:44:53.799969 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:44:53.881349 systemd-networkd[1609]: lo: Link UP
Sep 4 23:44:53.881360 systemd-networkd[1609]: lo: Gained carrier
Sep 4 23:44:53.883480 systemd-networkd[1609]: Enumeration completed
Sep 4 23:44:53.883646 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:53.883822 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:53.883825 systemd-networkd[1609]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:53.896493 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:44:53.906570 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:44:53.962221 kernel: mlx5_core 2069:00:02.0 enP8297s1: Link up
Sep 4 23:44:54.005256 systemd-resolved[1617]: Positive Trust Anchors:
Sep 4 23:44:54.005276 systemd-resolved[1617]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:54.005307 systemd-resolved[1617]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:54.009210 kernel: hv_netvsc 00224877-ba22-0022-4877-ba2200224877 eth0: Data path switched to VF: enP8297s1
Sep 4 23:44:54.011441 systemd-networkd[1609]: enP8297s1: Link UP
Sep 4 23:44:54.011710 systemd-networkd[1609]: eth0: Link UP
Sep 4 23:44:54.011808 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:44:54.011919 systemd-networkd[1609]: eth0: Gained carrier
Sep 4 23:44:54.011944 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:54.026756 systemd-networkd[1609]: enP8297s1: Gained carrier
Sep 4 23:44:54.034268 systemd-networkd[1609]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 4 23:44:54.054437 systemd-resolved[1617]: Using system hostname 'ci-4230.2.2-n-a8c1fd94a3'.
Sep 4 23:44:54.056153 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:54.063619 systemd[1]: Reached target network.target - Network.
Sep 4 23:44:54.070133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:55.114052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:55.939138 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:44:55.949157 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:44:56.055356 systemd-networkd[1609]: eth0: Gained IPv6LL
Sep 4 23:44:56.057737 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 23:44:56.066705 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 23:44:59.604287 ldconfig[1298]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:44:59.618002 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:44:59.630436 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:44:59.659655 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:44:59.666468 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:44:59.672520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:44:59.679527 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:44:59.687806 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:44:59.694959 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:44:59.703407 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:44:59.711671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:44:59.711702 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:44:59.717116 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:44:59.745697 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 23:44:59.754456 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 23:44:59.761950 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 23:44:59.769511 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 23:44:59.776877 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 23:44:59.789895 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 23:44:59.797846 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 23:44:59.806336 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 23:44:59.813542 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:44:59.819062 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:44:59.825538 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:44:59.825566 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:44:59.952299 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 4 23:44:59.961443 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 23:44:59.980402 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 23:44:59.989759 (chronyd)[1676]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 4 23:44:59.991808 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 23:44:59.999464 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 23:45:00.008394 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 23:45:00.018235 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 23:45:00.018281 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 4 23:45:00.020473 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 4 23:45:00.027017 chronyd[1688]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 4 23:45:00.027638 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 4 23:45:00.029318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:00.030560 jq[1683]: false
Sep 4 23:45:00.038032 KVP[1686]: KVP starting; pid is:1686
Sep 4 23:45:00.045241 KVP[1686]: KVP LIC Version: 3.1
Sep 4 23:45:00.046211 kernel: hv_utils: KVP IC version 4.0
Sep 4 23:45:00.047846 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 23:45:00.056243 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 23:45:00.064111 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 23:45:00.074740 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 23:45:00.082358 chronyd[1688]: Timezone right/UTC failed leap second check, ignoring
Sep 4 23:45:00.082589 chronyd[1688]: Loaded seccomp filter (level 2)
Sep 4 23:45:00.084412 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 23:45:00.095220 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 23:45:00.103440 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 23:45:00.104039 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 23:45:00.108460 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 23:45:00.117406 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 23:45:00.124975 extend-filesystems[1684]: Found loop4
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found loop5
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found loop6
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found loop7
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda1
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda2
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda3
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found usr
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda4
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda6
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda7
Sep 4 23:45:00.141204 extend-filesystems[1684]: Found sda9
Sep 4 23:45:00.141204 extend-filesystems[1684]: Checking size of /dev/sda9
Sep 4 23:45:00.127341 systemd[1]: Started chronyd.service - NTP client/server.
Sep 4 23:45:00.364052 update_engine[1699]: I20250904 23:45:00.246336 1699 main.cc:92] Flatcar Update Engine starting
Sep 4 23:45:00.364295 extend-filesystems[1684]: Old size kept for /dev/sda9
Sep 4 23:45:00.364295 extend-filesystems[1684]: Found sr0
Sep 4 23:45:00.380454 jq[1701]: true
Sep 4 23:45:00.138645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 23:45:00.139447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 23:45:00.145955 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 23:45:00.384951 tar[1708]: linux-arm64/LICENSE
Sep 4 23:45:00.384951 tar[1708]: linux-arm64/helm
Sep 4 23:45:00.146175 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 23:45:00.388756 jq[1713]: true
Sep 4 23:45:00.177374 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 23:45:00.177839 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 23:45:00.209639 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 23:45:00.389303 bash[1741]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:45:00.226965 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 23:45:00.242923 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 23:45:00.244253 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 23:45:00.263539 systemd-logind[1697]: New seat seat0.
Sep 4 23:45:00.264389 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 23:45:00.264918 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 23:45:00.388160 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 23:45:00.405640 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 23:45:00.424265 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1753)
Sep 4 23:45:00.497619 dbus-daemon[1682]: [system] SELinux support is enabled
Sep 4 23:45:00.499470 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 23:45:00.516241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 23:45:00.516836 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 23:45:00.529115 update_engine[1699]: I20250904 23:45:00.529063 1699 update_check_scheduler.cc:74] Next update check in 3m36s
Sep 4 23:45:00.531624 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 23:45:00.531651 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 23:45:00.549270 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 23:45:00.555870 dbus-daemon[1682]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 23:45:00.573580 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 23:45:00.633060 coreos-metadata[1678]: Sep 04 23:45:00.628 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 23:45:00.642087 coreos-metadata[1678]: Sep 04 23:45:00.641 INFO Fetch successful
Sep 4 23:45:00.642087 coreos-metadata[1678]: Sep 04 23:45:00.642 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 4 23:45:00.651032 coreos-metadata[1678]: Sep 04 23:45:00.650 INFO Fetch successful
Sep 4 23:45:00.651032 coreos-metadata[1678]: Sep 04 23:45:00.651 INFO Fetching http://168.63.129.16/machine/b03cece3-47db-44bf-8f7f-0871ba4bc6b5/9cbfc39b%2D7b7b%2D469e%2Da108%2D412f96ae1f5c.%5Fci%2D4230.2.2%2Dn%2Da8c1fd94a3?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 4 23:45:00.658907 coreos-metadata[1678]: Sep 04 23:45:00.658 INFO Fetch successful
Sep 4 23:45:00.658907 coreos-metadata[1678]: Sep 04 23:45:00.658 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 4 23:45:00.677241 coreos-metadata[1678]: Sep 04 23:45:00.676 INFO Fetch successful
Sep 4 23:45:00.711589 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 23:45:00.724177 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 23:45:00.888879 locksmithd[1803]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 23:45:00.915221 containerd[1714]: time="2025-09-04T23:45:00.914405040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 4 23:45:01.003311 containerd[1714]: time="2025-09-04T23:45:01.002749120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:01.006804 containerd[1714]: time="2025-09-04T23:45:01.006748160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:01.006804 containerd[1714]: time="2025-09-04T23:45:01.006795920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 23:45:01.006931 containerd[1714]: time="2025-09-04T23:45:01.006816000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 23:45:01.006995 containerd[1714]: time="2025-09-04T23:45:01.006971240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 23:45:01.007024 containerd[1714]: time="2025-09-04T23:45:01.006994360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:01.007080 containerd[1714]: time="2025-09-04T23:45:01.007058400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:01.007080 containerd[1714]: time="2025-09-04T23:45:01.007078880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:01.007328 containerd[1714]: time="2025-09-04T23:45:01.007306560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:01.007353 containerd[1714]: time="2025-09-04T23:45:01.007328640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:01.007353 containerd[1714]: time="2025-09-04T23:45:01.007344400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:01.007397 containerd[1714]: time="2025-09-04T23:45:01.007353320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:01.007473 containerd[1714]: time="2025-09-04T23:45:01.007441080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:01.009195 containerd[1714]: time="2025-09-04T23:45:01.007647840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:01.009195 containerd[1714]: time="2025-09-04T23:45:01.007781360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:01.009195 containerd[1714]: time="2025-09-04T23:45:01.007795360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 23:45:01.009195 containerd[1714]: time="2025-09-04T23:45:01.007868680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 23:45:01.009195 containerd[1714]: time="2025-09-04T23:45:01.007906600Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 23:45:01.026303 containerd[1714]: time="2025-09-04T23:45:01.026237760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 23:45:01.026488 containerd[1714]: time="2025-09-04T23:45:01.026332520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 23:45:01.026488 containerd[1714]: time="2025-09-04T23:45:01.026384720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 23:45:01.026488 containerd[1714]: time="2025-09-04T23:45:01.026414200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 23:45:01.026488 containerd[1714]: time="2025-09-04T23:45:01.026435400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 23:45:01.028198 containerd[1714]: time="2025-09-04T23:45:01.026629720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 23:45:01.028198 containerd[1714]: time="2025-09-04T23:45:01.028077040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 23:45:01.029380 containerd[1714]: time="2025-09-04T23:45:01.029227160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 23:45:01.029380 containerd[1714]: time="2025-09-04T23:45:01.029267000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 23:45:01.029380 containerd[1714]: time="2025-09-04T23:45:01.029303520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 23:45:01.029380 containerd[1714]: time="2025-09-04T23:45:01.029322640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.029380 containerd[1714]: time="2025-09-04T23:45:01.029339440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.029527 containerd[1714]: time="2025-09-04T23:45:01.029458400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.029527 containerd[1714]: time="2025-09-04T23:45:01.029483440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.029527 containerd[1714]: time="2025-09-04T23:45:01.029501800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.030259 containerd[1714]: time="2025-09-04T23:45:01.029520200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.030298 containerd[1714]: time="2025-09-04T23:45:01.030267960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.030298 containerd[1714]: time="2025-09-04T23:45:01.030289840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030334480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030361680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030381000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030412840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030426800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030443200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030482320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030502680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030519000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030540880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030572040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.030589160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.031029000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.031079880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 23:45:01.031953 containerd[1714]: time="2025-09-04T23:45:01.031116280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.031135440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.031835640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.031951360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.031995560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.032013520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.032027240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.032038440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.032056320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.032081680Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 23:45:01.032380 containerd[1714]: time="2025-09-04T23:45:01.032098480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 23:45:01.035031 containerd[1714]: time="2025-09-04T23:45:01.033232080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 23:45:01.035031 containerd[1714]: time="2025-09-04T23:45:01.033328640Z" level=info msg="Connect containerd service"
Sep 4 23:45:01.035031 containerd[1714]: time="2025-09-04T23:45:01.033809400Z" level=info msg="using legacy CRI server"
Sep 4 23:45:01.035031 containerd[1714]: time="2025-09-04T23:45:01.033828440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 23:45:01.035031 containerd[1714]: time="2025-09-04T23:45:01.034464880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.037160600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.037688240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.038528720Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.037822640Z" level=info msg="Start subscribing containerd event"
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.038591040Z" level=info msg="Start recovering state"
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.038815960Z" level=info msg="Start event monitor"
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.038836440Z" level=info msg="Start snapshots syncer"
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.038848840Z" level=info msg="Start cni network conf syncer for default"
Sep 4 23:45:01.040105 containerd[1714]: time="2025-09-04T23:45:01.038859960Z" level=info msg="Start streaming server"
Sep 4 23:45:01.039047 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 23:45:01.051482 containerd[1714]: time="2025-09-04T23:45:01.051425080Z" level=info msg="containerd successfully booted in 0.144433s"
Sep 4 23:45:01.251124 tar[1708]: linux-arm64/README.md
Sep 4 23:45:01.270591 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 23:45:01.435555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:01.443468 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:01.550959 sshd_keygen[1716]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 23:45:01.575426 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 23:45:01.588478 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 23:45:01.598391 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 4 23:45:01.611714 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 23:45:01.611910 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 23:45:01.640942 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 23:45:01.651419 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 4 23:45:01.669579 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 23:45:01.690723 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 23:45:01.714595 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 4 23:45:01.722922 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 23:45:01.729615 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 23:45:01.738071 systemd[1]: Startup finished in 732ms (kernel) + 14.792s (initrd) + 25.644s (userspace) = 41.169s.
Sep 4 23:45:02.020734 kubelet[1842]: E0904 23:45:02.020619 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:02.023586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:02.023850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:02.024285 systemd[1]: kubelet.service: Consumed 747ms CPU time, 263M memory peak.
Sep 4 23:45:02.350601 login[1869]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Sep 4 23:45:02.373764 login[1870]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:02.383546 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 23:45:02.393449 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 23:45:02.395594 systemd-logind[1697]: New session 2 of user core.
Sep 4 23:45:02.419233 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 23:45:02.425438 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 23:45:02.443773 (systemd)[1879]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 23:45:02.446221 systemd-logind[1697]: New session c1 of user core.
Sep 4 23:45:02.824884 systemd[1879]: Queued start job for default target default.target.
Sep 4 23:45:02.836614 systemd[1879]: Created slice app.slice - User Application Slice.
Sep 4 23:45:02.836645 systemd[1879]: Reached target paths.target - Paths.
Sep 4 23:45:02.836682 systemd[1879]: Reached target timers.target - Timers.
Sep 4 23:45:02.838311 systemd[1879]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 23:45:02.848874 systemd[1879]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 23:45:02.849119 systemd[1879]: Reached target sockets.target - Sockets.
Sep 4 23:45:02.849198 systemd[1879]: Reached target basic.target - Basic System.
Sep 4 23:45:02.849233 systemd[1879]: Reached target default.target - Main User Target.
Sep 4 23:45:02.849259 systemd[1879]: Startup finished in 396ms.
Sep 4 23:45:02.849490 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 23:45:02.861369 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 23:45:03.350990 login[1869]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:03.357039 systemd-logind[1697]: New session 1 of user core.
Sep 4 23:45:03.363416 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 23:45:04.219213 waagent[1866]: 2025-09-04T23:45:04.213612Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Sep 4 23:45:04.219877 waagent[1866]: 2025-09-04T23:45:04.219791Z INFO Daemon Daemon OS: flatcar 4230.2.2
Sep 4 23:45:04.224975 waagent[1866]: 2025-09-04T23:45:04.224911Z INFO Daemon Daemon Python: 3.11.11
Sep 4 23:45:04.231060 waagent[1866]: 2025-09-04T23:45:04.230849Z INFO Daemon Daemon Run daemon
Sep 4 23:45:04.236008 waagent[1866]: 2025-09-04T23:45:04.235947Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.2'
Sep 4 23:45:04.248236 waagent[1866]: 2025-09-04T23:45:04.248151Z INFO Daemon Daemon Using waagent for provisioning
Sep 4 23:45:04.253994 waagent[1866]: 2025-09-04T23:45:04.253941Z INFO Daemon Daemon Activate resource disk
Sep 4 23:45:04.259065 waagent[1866]: 2025-09-04T23:45:04.258998Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 4 23:45:04.273442 waagent[1866]: 2025-09-04T23:45:04.273357Z INFO Daemon Daemon Found device: None
Sep 4 23:45:04.280270 waagent[1866]: 2025-09-04T23:45:04.280203Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 4 23:45:04.290250 waagent[1866]: 2025-09-04T23:45:04.290172Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 4 23:45:04.303746 waagent[1866]: 2025-09-04T23:45:04.303693Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 4 23:45:04.310128 waagent[1866]: 2025-09-04T23:45:04.310068Z INFO Daemon Daemon Running default provisioning handler
Sep 4 23:45:04.322364 waagent[1866]: 2025-09-04T23:45:04.322264Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 4 23:45:04.340271 waagent[1866]: 2025-09-04T23:45:04.340168Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 4 23:45:04.353883 waagent[1866]: 2025-09-04T23:45:04.353808Z INFO Daemon Daemon cloud-init is enabled: False
Sep 4 23:45:04.360696 waagent[1866]: 2025-09-04T23:45:04.360632Z INFO Daemon Daemon Copying ovf-env.xml
Sep 4 23:45:04.498412 waagent[1866]: 2025-09-04T23:45:04.498258Z INFO Daemon Daemon Successfully mounted dvd
Sep 4 23:45:04.530224 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 4 23:45:04.530552 waagent[1866]: 2025-09-04T23:45:04.530467Z INFO Daemon Daemon Detect protocol endpoint
Sep 4 23:45:04.536259 waagent[1866]: 2025-09-04T23:45:04.536171Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 4 23:45:04.542996 waagent[1866]: 2025-09-04T23:45:04.542925Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 4 23:45:04.550580 waagent[1866]: 2025-09-04T23:45:04.550517Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 4 23:45:04.556491 waagent[1866]: 2025-09-04T23:45:04.556429Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 4 23:45:04.562272 waagent[1866]: 2025-09-04T23:45:04.562212Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 4 23:45:04.628484 waagent[1866]: 2025-09-04T23:45:04.628430Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 4 23:45:04.635894 waagent[1866]: 2025-09-04T23:45:04.635862Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 4 23:45:04.642606 waagent[1866]: 2025-09-04T23:45:04.642546Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 4 23:45:04.916712 waagent[1866]: 2025-09-04T23:45:04.916554Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 4 23:45:04.924781 waagent[1866]: 2025-09-04T23:45:04.924708Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 4 23:45:04.935893 waagent[1866]: 2025-09-04T23:45:04.935834Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 4 23:45:05.005645 waagent[1866]: 2025-09-04T23:45:05.005589Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 4 23:45:05.012729 waagent[1866]: 2025-09-04T23:45:05.012677Z INFO Daemon
Sep 4 23:45:05.016369 waagent[1866]: 2025-09-04T23:45:05.016299Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a8b4efb3-107a-4bef-9b74-7738ecfe5e18 eTag: 17377647882103623358 source: Fabric]
Sep 4 23:45:05.030738 waagent[1866]: 2025-09-04T23:45:05.030686Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 4 23:45:05.039866 waagent[1866]: 2025-09-04T23:45:05.039814Z INFO Daemon
Sep 4 23:45:05.043507 waagent[1866]: 2025-09-04T23:45:05.043449Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 4 23:45:05.056529 waagent[1866]: 2025-09-04T23:45:05.056484Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 4 23:45:05.148265 waagent[1866]: 2025-09-04T23:45:05.148043Z INFO Daemon Downloaded certificate {'thumbprint': '6D0D8150B6B9758E0FF32B6C032C2BB2E3272782', 'hasPrivateKey': True}
Sep 4 23:45:05.160646 waagent[1866]: 2025-09-04T23:45:05.160582Z INFO Daemon Fetch goal state completed
Sep 4 23:45:05.172694 waagent[1866]: 2025-09-04T23:45:05.172580Z INFO Daemon Daemon Starting provisioning
Sep 4 23:45:05.181278 waagent[1866]: 2025-09-04T23:45:05.181095Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 4 23:45:05.187358 waagent[1866]: 2025-09-04T23:45:05.187291Z INFO Daemon Daemon Set hostname [ci-4230.2.2-n-a8c1fd94a3]
Sep 4 23:45:05.540275 waagent[1866]: 2025-09-04T23:45:05.540172Z INFO Daemon Daemon Publish hostname [ci-4230.2.2-n-a8c1fd94a3]
Sep 4 23:45:05.546793 waagent[1866]: 2025-09-04T23:45:05.546727Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 4 23:45:05.553116 waagent[1866]: 2025-09-04T23:45:05.553060Z INFO Daemon Daemon Primary interface is [eth0]
Sep 4 23:45:05.565452 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:05.565464 systemd-networkd[1609]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:45:05.565493 systemd-networkd[1609]: eth0: DHCP lease lost
Sep 4 23:45:05.566676 waagent[1866]: 2025-09-04T23:45:05.566591Z INFO Daemon Daemon Create user account if not exists
Sep 4 23:45:05.573832 waagent[1866]: 2025-09-04T23:45:05.573769Z INFO Daemon Daemon User core already exists, skip useradd
Sep 4 23:45:05.579951 waagent[1866]: 2025-09-04T23:45:05.579891Z INFO Daemon Daemon Configure sudoer
Sep 4 23:45:05.585978 waagent[1866]: 2025-09-04T23:45:05.585912Z INFO Daemon Daemon Configure sshd
Sep 4 23:45:05.590875 waagent[1866]: 2025-09-04T23:45:05.590808Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 4 23:45:05.604017 waagent[1866]: 2025-09-04T23:45:05.603942Z INFO Daemon Daemon Deploy ssh public key.
Sep 4 23:45:05.636231 systemd-networkd[1609]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 4 23:45:06.771788 waagent[1866]: 2025-09-04T23:45:06.771732Z INFO Daemon Daemon Provisioning complete
Sep 4 23:45:06.791595 waagent[1866]: 2025-09-04T23:45:06.791546Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 4 23:45:06.798152 waagent[1866]: 2025-09-04T23:45:06.798082Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 4 23:45:06.808518 waagent[1866]: 2025-09-04T23:45:06.808460Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Sep 4 23:45:06.950686 waagent[1929]: 2025-09-04T23:45:06.950610Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Sep 4 23:45:06.951607 waagent[1929]: 2025-09-04T23:45:06.951141Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.2
Sep 4 23:45:06.951607 waagent[1929]: 2025-09-04T23:45:06.951256Z INFO ExtHandler ExtHandler Python: 3.11.11
Sep 4 23:45:07.022774 waagent[1929]: 2025-09-04T23:45:07.022603Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 4 23:45:07.022917 waagent[1929]: 2025-09-04T23:45:07.022876Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 4 23:45:07.022981 waagent[1929]: 2025-09-04T23:45:07.022951Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 4 23:45:07.032897 waagent[1929]: 2025-09-04T23:45:07.032784Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 4 23:45:07.041622 waagent[1929]: 2025-09-04T23:45:07.041560Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 4 23:45:07.042353 waagent[1929]: 2025-09-04T23:45:07.042289Z INFO ExtHandler
Sep 4 23:45:07.042445 waagent[1929]: 2025-09-04T23:45:07.042408Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e44b2da8-bd50-4ba6-8402-d7c1c9296457 eTag: 17377647882103623358 source: Fabric]
Sep 4 23:45:07.042809 waagent[1929]: 2025-09-04T23:45:07.042763Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 4 23:45:07.043550 waagent[1929]: 2025-09-04T23:45:07.043491Z INFO ExtHandler
Sep 4 23:45:07.043634 waagent[1929]: 2025-09-04T23:45:07.043598Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 4 23:45:07.048475 waagent[1929]: 2025-09-04T23:45:07.048426Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 4 23:45:07.128793 waagent[1929]: 2025-09-04T23:45:07.128679Z INFO ExtHandler Downloaded certificate {'thumbprint': '6D0D8150B6B9758E0FF32B6C032C2BB2E3272782', 'hasPrivateKey': True}
Sep 4 23:45:07.129462 waagent[1929]: 2025-09-04T23:45:07.129411Z INFO ExtHandler Fetch goal state completed
Sep 4 23:45:07.147300 waagent[1929]: 2025-09-04T23:45:07.147231Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1929
Sep 4 23:45:07.147472 waagent[1929]: 2025-09-04T23:45:07.147433Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 4 23:45:07.149291 waagent[1929]: 2025-09-04T23:45:07.149237Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.2', '', 'Flatcar Container Linux by Kinvolk']
Sep 4 23:45:07.149697 waagent[1929]: 2025-09-04T23:45:07.149654Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 4 23:45:07.224716 waagent[1929]: 2025-09-04T23:45:07.224666Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 4 23:45:07.224936 waagent[1929]: 2025-09-04T23:45:07.224891Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 4 23:45:07.231268 waagent[1929]: 2025-09-04T23:45:07.230789Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 4 23:45:07.237456 systemd[1]: Reload requested from client PID 1942 ('systemctl') (unit waagent.service)...
Sep 4 23:45:07.237765 systemd[1]: Reloading...
Sep 4 23:45:07.345222 zram_generator::config[1987]: No configuration found.
Sep 4 23:45:07.441587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:07.543748 systemd[1]: Reloading finished in 305 ms.
Sep 4 23:45:07.558472 waagent[1929]: 2025-09-04T23:45:07.557934Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Sep 4 23:45:07.566137 systemd[1]: Reload requested from client PID 2038 ('systemctl') (unit waagent.service)...
Sep 4 23:45:07.566319 systemd[1]: Reloading...
Sep 4 23:45:07.659241 zram_generator::config[2080]: No configuration found.
Sep 4 23:45:07.765691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:07.865153 systemd[1]: Reloading finished in 298 ms.
Sep 4 23:45:07.880412 waagent[1929]: 2025-09-04T23:45:07.880250Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 4 23:45:07.880512 waagent[1929]: 2025-09-04T23:45:07.880419Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 4 23:45:08.369220 waagent[1929]: 2025-09-04T23:45:08.368422Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 4 23:45:08.369220 waagent[1929]: 2025-09-04T23:45:08.369063Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Sep 4 23:45:08.370029 waagent[1929]: 2025-09-04T23:45:08.369933Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 4 23:45:08.370631 waagent[1929]: 2025-09-04T23:45:08.370390Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 4 23:45:08.370631 waagent[1929]: 2025-09-04T23:45:08.370578Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 4 23:45:08.370929 waagent[1929]: 2025-09-04T23:45:08.370877Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 4 23:45:08.371898 waagent[1929]: 2025-09-04T23:45:08.371041Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 4 23:45:08.371898 waagent[1929]: 2025-09-04T23:45:08.371236Z INFO EnvHandler ExtHandler Configure routes
Sep 4 23:45:08.371898 waagent[1929]: 2025-09-04T23:45:08.371325Z INFO EnvHandler ExtHandler Gateway:None
Sep 4 23:45:08.371898 waagent[1929]: 2025-09-04T23:45:08.371375Z INFO EnvHandler ExtHandler Routes:None
Sep 4 23:45:08.372225 waagent[1929]: 2025-09-04T23:45:08.372154Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 4 23:45:08.372562 waagent[1929]: 2025-09-04T23:45:08.372512Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 4 23:45:08.372744 waagent[1929]: 2025-09-04T23:45:08.372702Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 4 23:45:08.373021 waagent[1929]: 2025-09-04T23:45:08.372975Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 4 23:45:08.373021 waagent[1929]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 4 23:45:08.373021 waagent[1929]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 4 23:45:08.373021 waagent[1929]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 4 23:45:08.373021 waagent[1929]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 4 23:45:08.373021 waagent[1929]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 4 23:45:08.373021 waagent[1929]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 4 23:45:08.373817 waagent[1929]: 2025-09-04T23:45:08.373764Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 4 23:45:08.376913 waagent[1929]: 2025-09-04T23:45:08.376838Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 4 23:45:08.377070 waagent[1929]: 2025-09-04T23:45:08.377015Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 4 23:45:08.377327 waagent[1929]: 2025-09-04T23:45:08.377261Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 4 23:45:08.383885 waagent[1929]: 2025-09-04T23:45:08.383824Z INFO ExtHandler ExtHandler
Sep 4 23:45:08.384367 waagent[1929]: 2025-09-04T23:45:08.384322Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 51e94e63-ef2a-4bae-8e05-0fa551e320be correlation 353f4088-5bcf-4df7-89c6-2f1d86d04858 created: 2025-09-04T23:43:27.029800Z]
Sep 4 23:45:08.384914 waagent[1929]: 2025-09-04T23:45:08.384858Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 4 23:45:08.385856 waagent[1929]: 2025-09-04T23:45:08.385788Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Sep 4 23:45:08.426099 waagent[1929]: 2025-09-04T23:45:08.425957Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 24D76EA6-555D-4273-B36B-EB44D85ED53D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Sep 4 23:45:08.472178 waagent[1929]: 2025-09-04T23:45:08.471698Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 4 23:45:08.472178 waagent[1929]: Executing ['ip', '-a', '-o', 'link']:
Sep 4 23:45:08.472178 waagent[1929]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 4 23:45:08.472178 waagent[1929]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:ba:22 brd ff:ff:ff:ff:ff:ff
Sep 4 23:45:08.472178 waagent[1929]: 3: enP8297s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:ba:22 brd ff:ff:ff:ff:ff:ff\ altname enP8297p0s2
Sep 4 23:45:08.472178 waagent[1929]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 4 23:45:08.472178 waagent[1929]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 4 23:45:08.472178 waagent[1929]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 4 23:45:08.472178 waagent[1929]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 4 23:45:08.472178 waagent[1929]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 4 23:45:08.472178 waagent[1929]: 2: eth0 inet6 fe80::222:48ff:fe77:ba22/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 4 23:45:08.553252 waagent[1929]: 2025-09-04T23:45:08.552816Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Sep 4 23:45:08.553252 waagent[1929]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 4 23:45:08.553252 waagent[1929]: pkts bytes target prot opt in out source destination
Sep 4 23:45:08.553252 waagent[1929]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 4 23:45:08.553252 waagent[1929]: pkts bytes target prot opt in out source destination
Sep 4 23:45:08.553252 waagent[1929]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 4 23:45:08.553252 waagent[1929]: pkts bytes target prot opt in out source destination
Sep 4 23:45:08.553252 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 4 23:45:08.553252 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 4 23:45:08.553252 waagent[1929]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 4 23:45:08.555969 waagent[1929]: 2025-09-04T23:45:08.555895Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 4 23:45:08.555969 waagent[1929]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 4 23:45:08.555969 waagent[1929]: pkts bytes target prot opt in out source destination
Sep 4 23:45:08.555969 waagent[1929]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 4 23:45:08.555969 waagent[1929]: pkts bytes target prot opt in out source destination
Sep 4 23:45:08.555969 waagent[1929]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 4 23:45:08.555969 waagent[1929]: pkts bytes target prot opt in out source destination
Sep 4 23:45:08.555969 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 4 23:45:08.555969 waagent[1929]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 4 23:45:08.555969 waagent[1929]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 4 23:45:08.556329 waagent[1929]: 2025-09-04T23:45:08.556219Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 4 23:45:12.205347 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:45:12.211443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:12.339033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:12.353551 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:12.439645 kubelet[2170]: E0904 23:45:12.439287 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:12.443284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:12.443572 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:12.443984 systemd[1]: kubelet.service: Consumed 142ms CPU time, 105M memory peak.
Sep 4 23:45:13.962119 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 23:45:13.964302 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.16.10:44034.service - OpenSSH per-connection server daemon (10.200.16.10:44034).
Sep 4 23:45:14.579581 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 44034 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:14.580867 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:14.586122 systemd-logind[1697]: New session 3 of user core.
Sep 4 23:45:14.591458 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 23:45:15.031465 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.16.10:44036.service - OpenSSH per-connection server daemon (10.200.16.10:44036).
Sep 4 23:45:15.524687 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 44036 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:15.526129 sshd-session[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:15.532583 systemd-logind[1697]: New session 4 of user core.
Sep 4 23:45:15.539413 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 23:45:15.887327 sshd[2185]: Connection closed by 10.200.16.10 port 44036
Sep 4 23:45:15.887955 sshd-session[2183]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:15.891483 systemd[1]: sshd@1-10.200.20.4:22-10.200.16.10:44036.service: Deactivated successfully.
Sep 4 23:45:15.893087 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 23:45:15.893792 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit.
Sep 4 23:45:15.895091 systemd-logind[1697]: Removed session 4.
Sep 4 23:45:15.975655 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.16.10:44052.service - OpenSSH per-connection server daemon (10.200.16.10:44052).
Sep 4 23:45:16.434657 sshd[2191]: Accepted publickey for core from 10.200.16.10 port 44052 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:16.435986 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:16.440411 systemd-logind[1697]: New session 5 of user core.
Sep 4 23:45:16.451367 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 23:45:16.775117 sshd[2193]: Connection closed by 10.200.16.10 port 44052
Sep 4 23:45:16.775021 sshd-session[2191]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:16.778334 systemd[1]: sshd@2-10.200.20.4:22-10.200.16.10:44052.service: Deactivated successfully.
Sep 4 23:45:16.779896 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 23:45:16.780590 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit.
Sep 4 23:45:16.781641 systemd-logind[1697]: Removed session 5.
Sep 4 23:45:16.865443 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.16.10:44066.service - OpenSSH per-connection server daemon (10.200.16.10:44066).
Sep 4 23:45:17.318475 sshd[2199]: Accepted publickey for core from 10.200.16.10 port 44066 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:17.319694 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:17.325235 systemd-logind[1697]: New session 6 of user core.
Sep 4 23:45:17.327363 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 23:45:17.658229 sshd[2201]: Connection closed by 10.200.16.10 port 44066
Sep 4 23:45:17.658799 sshd-session[2199]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:17.662420 systemd[1]: sshd@3-10.200.20.4:22-10.200.16.10:44066.service: Deactivated successfully.
Sep 4 23:45:17.664068 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 23:45:17.664813 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit.
Sep 4 23:45:17.665732 systemd-logind[1697]: Removed session 6.
Sep 4 23:45:17.751452 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.16.10:44076.service - OpenSSH per-connection server daemon (10.200.16.10:44076).
Sep 4 23:45:18.244049 sshd[2207]: Accepted publickey for core from 10.200.16.10 port 44076 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:18.245453 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:18.249819 systemd-logind[1697]: New session 7 of user core.
Sep 4 23:45:18.260383 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 23:45:18.653473 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 23:45:18.653767 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:18.686331 sudo[2210]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:18.765964 sshd[2209]: Connection closed by 10.200.16.10 port 44076
Sep 4 23:45:18.766755 sshd-session[2207]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:18.770636 systemd[1]: sshd@4-10.200.20.4:22-10.200.16.10:44076.service: Deactivated successfully.
Sep 4 23:45:18.772207 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 23:45:18.772918 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit.
Sep 4 23:45:18.774032 systemd-logind[1697]: Removed session 7.
Sep 4 23:45:18.854840 systemd[1]: Started sshd@5-10.200.20.4:22-10.200.16.10:44084.service - OpenSSH per-connection server daemon (10.200.16.10:44084).
Sep 4 23:45:19.352017 sshd[2216]: Accepted publickey for core from 10.200.16.10 port 44084 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:19.353440 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:19.359178 systemd-logind[1697]: New session 8 of user core.
Sep 4 23:45:19.364459 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 23:45:19.628059 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 23:45:19.628518 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:19.632178 sudo[2220]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:19.637773 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 23:45:19.638048 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:19.660810 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:45:19.683794 augenrules[2242]: No rules
Sep 4 23:45:19.685283 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:45:19.685483 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:45:19.686930 sudo[2219]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:19.758054 sshd[2218]: Connection closed by 10.200.16.10 port 44084
Sep 4 23:45:19.758758 sshd-session[2216]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:19.761463 systemd[1]: sshd@5-10.200.20.4:22-10.200.16.10:44084.service: Deactivated successfully.
Sep 4 23:45:19.763166 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 23:45:19.764679 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit.
Sep 4 23:45:19.765958 systemd-logind[1697]: Removed session 8.
Sep 4 23:45:19.850449 systemd[1]: Started sshd@6-10.200.20.4:22-10.200.16.10:34878.service - OpenSSH per-connection server daemon (10.200.16.10:34878).
Sep 4 23:45:20.307036 sshd[2251]: Accepted publickey for core from 10.200.16.10 port 34878 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:45:20.308363 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:20.312554 systemd-logind[1697]: New session 9 of user core.
Sep 4 23:45:20.320378 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 23:45:20.565501 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 23:45:20.565794 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:22.455254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 23:45:22.462505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:22.490683 (dockerd)[2275]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 23:45:22.491730 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 23:45:22.584121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:22.593518 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:22.645669 kubelet[2281]: E0904 23:45:22.645594 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:22.648592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:22.648877 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:22.651275 systemd[1]: kubelet.service: Consumed 149ms CPU time, 107.2M memory peak.
Sep 4 23:45:23.884295 chronyd[1688]: Selected source PHC0
Sep 4 23:45:23.899504 dockerd[2275]: time="2025-09-04T23:45:23.899445825Z" level=info msg="Starting up"
Sep 4 23:45:24.371178 dockerd[2275]: time="2025-09-04T23:45:24.371132109Z" level=info msg="Loading containers: start."
Sep 4 23:45:24.634346 kernel: Initializing XFRM netlink socket
Sep 4 23:45:24.911804 systemd-networkd[1609]: docker0: Link UP
Sep 4 23:45:24.947419 dockerd[2275]: time="2025-09-04T23:45:24.947373531Z" level=info msg="Loading containers: done."
Sep 4 23:45:24.961042 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1270540127-merged.mount: Deactivated successfully.
Sep 4 23:45:24.970660 dockerd[2275]: time="2025-09-04T23:45:24.970616397Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 23:45:24.970757 dockerd[2275]: time="2025-09-04T23:45:24.970729597Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 4 23:45:24.970882 dockerd[2275]: time="2025-09-04T23:45:24.970852717Z" level=info msg="Daemon has completed initialization"
Sep 4 23:45:25.026815 dockerd[2275]: time="2025-09-04T23:45:25.026740125Z" level=info msg="API listen on /run/docker.sock"
Sep 4 23:45:25.026931 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 23:45:25.861839 containerd[1714]: time="2025-09-04T23:45:25.861567410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 4 23:45:26.754145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2271094521.mount: Deactivated successfully.
Sep 4 23:45:27.941445 containerd[1714]: time="2025-09-04T23:45:27.941346152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:27.945975 containerd[1714]: time="2025-09-04T23:45:27.945731790Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613"
Sep 4 23:45:27.951371 containerd[1714]: time="2025-09-04T23:45:27.951333707Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:27.957591 containerd[1714]: time="2025-09-04T23:45:27.957117663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:27.958328 containerd[1714]: time="2025-09-04T23:45:27.958286303Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.096673653s"
Sep 4 23:45:27.958328 containerd[1714]: time="2025-09-04T23:45:27.958328262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 4 23:45:27.960337 containerd[1714]: time="2025-09-04T23:45:27.960297341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 4 23:45:29.228691 containerd[1714]: time="2025-09-04T23:45:29.228632159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:29.232077 containerd[1714]: time="2025-09-04T23:45:29.231842637Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977"
Sep 4 23:45:29.235458 containerd[1714]: time="2025-09-04T23:45:29.235398915Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:29.240942 containerd[1714]: time="2025-09-04T23:45:29.240854952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:29.242359 containerd[1714]: time="2025-09-04T23:45:29.242173711Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.28183221s"
Sep 4 23:45:29.242359 containerd[1714]: time="2025-09-04T23:45:29.242241311Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 4 23:45:29.242923 containerd[1714]: time="2025-09-04T23:45:29.242874150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 4 23:45:30.281233 containerd[1714]: time="2025-09-04T23:45:30.281013623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:30.287274 containerd[1714]: time="2025-09-04T23:45:30.287214659Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014"
Sep 4 23:45:30.293078 containerd[1714]: time="2025-09-04T23:45:30.293038736Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:30.299626 containerd[1714]: time="2025-09-04T23:45:30.299550732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:30.300929 containerd[1714]: time="2025-09-04T23:45:30.300760051Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.057732741s"
Sep 4 23:45:30.300929 containerd[1714]: time="2025-09-04T23:45:30.300802531Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 4 23:45:30.301354 containerd[1714]: time="2025-09-04T23:45:30.301251051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 4 23:45:31.505255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198868989.mount: Deactivated successfully.
Sep 4 23:45:31.863801 containerd[1714]: time="2025-09-04T23:45:31.863660016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:31.866516 containerd[1714]: time="2025-09-04T23:45:31.866458895Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959"
Sep 4 23:45:31.870399 containerd[1714]: time="2025-09-04T23:45:31.870346532Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:31.875012 containerd[1714]: time="2025-09-04T23:45:31.874827210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:31.875993 containerd[1714]: time="2025-09-04T23:45:31.875472529Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.574186318s"
Sep 4 23:45:31.875993 containerd[1714]: time="2025-09-04T23:45:31.875506129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 4 23:45:31.875993 containerd[1714]: time="2025-09-04T23:45:31.875969529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 4 23:45:32.661338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087384975.mount: Deactivated successfully.
Sep 4 23:45:32.662509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 23:45:32.668411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:32.776327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:32.776741 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:32.895226 kubelet[2553]: E0904 23:45:32.894874 2553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:32.897223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:32.897371 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:32.897921 systemd[1]: kubelet.service: Consumed 128ms CPU time, 109.1M memory peak.
Sep 4 23:45:35.105617 containerd[1714]: time="2025-09-04T23:45:35.105559450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:35.108297 containerd[1714]: time="2025-09-04T23:45:35.108219408Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Sep 4 23:45:35.112203 containerd[1714]: time="2025-09-04T23:45:35.112141285Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:35.118322 containerd[1714]: time="2025-09-04T23:45:35.118264641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:35.119620 containerd[1714]: time="2025-09-04T23:45:35.119480720Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 3.243485231s"
Sep 4 23:45:35.119620 containerd[1714]: time="2025-09-04T23:45:35.119519400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 4 23:45:35.120237 containerd[1714]: time="2025-09-04T23:45:35.120180520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 23:45:35.763394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770381987.mount: Deactivated successfully.
Sep 4 23:45:35.797234 containerd[1714]: time="2025-09-04T23:45:35.797014858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:35.800940 containerd[1714]: time="2025-09-04T23:45:35.800745936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 4 23:45:35.804026 containerd[1714]: time="2025-09-04T23:45:35.803974694Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:35.809108 containerd[1714]: time="2025-09-04T23:45:35.809039890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:35.810114 containerd[1714]: time="2025-09-04T23:45:35.810075049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 689.26701ms"
Sep 4 23:45:35.810156 containerd[1714]: time="2025-09-04T23:45:35.810117329Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 4 23:45:35.810741 containerd[1714]: time="2025-09-04T23:45:35.810710249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 4 23:45:36.526570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539622608.mount: Deactivated successfully.
Sep 4 23:45:39.639515 containerd[1714]: time="2025-09-04T23:45:39.639447519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:39.642762 containerd[1714]: time="2025-09-04T23:45:39.642689797Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295"
Sep 4 23:45:39.646360 containerd[1714]: time="2025-09-04T23:45:39.646289235Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:39.651772 containerd[1714]: time="2025-09-04T23:45:39.651701231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:39.653642 containerd[1714]: time="2025-09-04T23:45:39.653596910Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.842850861s"
Sep 4 23:45:39.653898 containerd[1714]: time="2025-09-04T23:45:39.653782590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 4 23:45:41.232516 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 4 23:45:42.955276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 4 23:45:42.965652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:43.075409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:43.080798 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:43.214235 kubelet[2696]: E0904 23:45:43.214087 2696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:43.217592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:43.217730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:43.220276 systemd[1]: kubelet.service: Consumed 218ms CPU time, 107.4M memory peak.
Sep 4 23:45:44.323211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:44.323572 systemd[1]: kubelet.service: Consumed 218ms CPU time, 107.4M memory peak.
Sep 4 23:45:44.335830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:44.368010 systemd[1]: Reload requested from client PID 2710 ('systemctl') (unit session-9.scope)...
Sep 4 23:45:44.368231 systemd[1]: Reloading...
Sep 4 23:45:44.498265 zram_generator::config[2763]: No configuration found.
Sep 4 23:45:44.604236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:44.708851 systemd[1]: Reloading finished in 340 ms.
Sep 4 23:45:44.887822 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 23:45:44.887924 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 23:45:44.888226 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:44.888278 systemd[1]: kubelet.service: Consumed 84ms CPU time, 93.9M memory peak.
Sep 4 23:45:44.894505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:45.006999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:45.017509 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:45:45.055556 kubelet[2823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:45.055556 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:45:45.055556 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:45.055917 kubelet[2823]: I0904 23:45:45.055605 2823 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:45:45.589273 update_engine[1699]: I20250904 23:45:45.589210 1699 update_attempter.cc:509] Updating boot flags...
Sep 4 23:45:45.688240 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2844)
Sep 4 23:45:46.203408 kubelet[2823]: I0904 23:45:46.203357 2823 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 4 23:45:46.203408 kubelet[2823]: I0904 23:45:46.203391 2823 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:45:46.203878 kubelet[2823]: I0904 23:45:46.203608 2823 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 4 23:45:46.224105 kubelet[2823]: E0904 23:45:46.224047 2823 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 4 23:45:46.226961 kubelet[2823]: I0904 23:45:46.226919 2823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:45:46.236390 kubelet[2823]: E0904 23:45:46.236337 2823 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:45:46.236390 kubelet[2823]: I0904 23:45:46.236391 2823 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:45:46.239710 kubelet[2823]: I0904 23:45:46.239680 2823 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:45:46.239943 kubelet[2823]: I0904 23:45:46.239910 2823 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:45:46.240098 kubelet[2823]: I0904 23:45:46.239939 2823 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-a8c1fd94a3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:45:46.240220 kubelet[2823]: I0904 23:45:46.240107 2823 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:45:46.240220 kubelet[2823]: I0904 23:45:46.240117 2823 container_manager_linux.go:303] "Creating device plugin manager"
Sep 4 23:45:46.240290 kubelet[2823]: I0904 23:45:46.240271 2823 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:45:46.243365 kubelet[2823]: I0904 23:45:46.243342 2823 kubelet.go:480] "Attempting to sync node with API server"
Sep 4 23:45:46.243420 kubelet[2823]: I0904 23:45:46.243369 2823 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:45:46.243420 kubelet[2823]: I0904 23:45:46.243400 2823 kubelet.go:386] "Adding apiserver pod source"
Sep 4 23:45:46.243420 kubelet[2823]: I0904 23:45:46.243415 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:45:46.248145 kubelet[2823]: I0904 23:45:46.248112 2823 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:45:46.249225 kubelet[2823]: I0904 23:45:46.248738 2823 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 4 23:45:46.249225 kubelet[2823]: W0904 23:45:46.248811 2823 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 23:45:46.252274 kubelet[2823]: I0904 23:45:46.251781 2823 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 23:45:46.252274 kubelet[2823]: I0904 23:45:46.251829 2823 server.go:1289] "Started kubelet"
Sep 4 23:45:46.252274 kubelet[2823]: E0904 23:45:46.252023 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 4 23:45:46.252274 kubelet[2823]: E0904 23:45:46.252240 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-a8c1fd94a3&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 4 23:45:46.253612 kubelet[2823]: I0904 23:45:46.253575 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:45:46.253837 kubelet[2823]: I0904 23:45:46.253801 2823 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:45:46.254772 kubelet[2823]: I0904 23:45:46.254750 2823 server.go:317] "Adding debug handlers to kubelet server"
Sep 4 23:45:46.257528 kubelet[2823]: I0904 23:45:46.257492 2823 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:45:46.260212 kubelet[2823]: I0904 23:45:46.259126 2823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:45:46.260212 kubelet[2823]: I0904 23:45:46.259533 2823 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:45:46.261346 kubelet[2823]: I0904 23:45:46.260944 2823 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 23:45:46.261780 kubelet[2823]: E0904 23:45:46.261740 2823 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found"
Sep 4 23:45:46.262899 kubelet[2823]: E0904 23:45:46.262868 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-a8c1fd94a3?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="200ms"
Sep 4 23:45:46.266235 kubelet[2823]: E0904 23:45:46.264991 2823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-a8c1fd94a3.186239102c344e8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-a8c1fd94a3,UID:ci-4230.2.2-n-a8c1fd94a3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-a8c1fd94a3,},FirstTimestamp:2025-09-04 23:45:46.251800207 +0000 UTC m=+1.231058134,LastTimestamp:2025-09-04 23:45:46.251800207 +0000 UTC m=+1.231058134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-a8c1fd94a3,}"
Sep 4 23:45:46.268569 kubelet[2823]: I0904 23:45:46.268551 2823 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 23:45:46.268731 kubelet[2823]: I0904 23:45:46.268720 2823 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:45:46.269021 kubelet[2823]: I0904 23:45:46.268973 2823 factory.go:223] Registration of the systemd container factory successfully
Sep 4 23:45:46.269124 kubelet[2823]: I0904 23:45:46.269098 2823 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:45:46.269892 kubelet[2823]: E0904 23:45:46.269858 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 4 23:45:46.271679 kubelet[2823]: E0904 23:45:46.271652 2823 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:45:46.272999 kubelet[2823]: I0904 23:45:46.272977 2823 factory.go:223] Registration of the containerd container factory successfully
Sep 4 23:45:46.298911 kubelet[2823]: I0904 23:45:46.298880 2823 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 23:45:46.298911 kubelet[2823]: I0904 23:45:46.298901 2823 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 23:45:46.298911 kubelet[2823]: I0904 23:45:46.298921 2823 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:45:46.305689 kubelet[2823]: I0904 23:45:46.305655 2823 policy_none.go:49] "None policy: Start"
Sep 4 23:45:46.305689 kubelet[2823]: I0904 23:45:46.305688 2823 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 23:45:46.305820 kubelet[2823]: I0904 23:45:46.305710 2823 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:45:46.316111 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 23:45:46.320045 kubelet[2823]: I0904 23:45:46.319985 2823 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:45:46.321011 kubelet[2823]: I0904 23:45:46.320984 2823 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:45:46.321052 kubelet[2823]: I0904 23:45:46.321014 2823 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 4 23:45:46.321052 kubelet[2823]: I0904 23:45:46.321047 2823 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 23:45:46.321093 kubelet[2823]: I0904 23:45:46.321054 2823 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 4 23:45:46.321122 kubelet[2823]: E0904 23:45:46.321091 2823 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 23:45:46.324989 kubelet[2823]: E0904 23:45:46.324899 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 4 23:45:46.330677 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 23:45:46.334418 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 23:45:46.346162 kubelet[2823]: E0904 23:45:46.346002 2823 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 23:45:46.346311 kubelet[2823]: I0904 23:45:46.346264 2823 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:45:46.346311 kubelet[2823]: I0904 23:45:46.346279 2823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:45:46.346706 kubelet[2823]: I0904 23:45:46.346642 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:45:46.349253 kubelet[2823]: E0904 23:45:46.348745 2823 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:45:46.349253 kubelet[2823]: E0904 23:45:46.348792 2823 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-a8c1fd94a3\" not found" Sep 4 23:45:46.448001 kubelet[2823]: I0904 23:45:46.447956 2823 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.448491 kubelet[2823]: E0904 23:45:46.448353 2823 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.452951 systemd[1]: Created slice kubepods-burstable-podc811c3e3a42e28585df5387e2b43f2eb.slice - libcontainer container kubepods-burstable-podc811c3e3a42e28585df5387e2b43f2eb.slice. 
Sep 4 23:45:46.464024 kubelet[2823]: E0904 23:45:46.463985 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-a8c1fd94a3?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="400ms" Sep 4 23:45:46.465531 kubelet[2823]: E0904 23:45:46.465505 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.469799 kubelet[2823]: I0904 23:45:46.469752 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/692b95e7aa6452dbd4cc1437ab863739-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"692b95e7aa6452dbd4cc1437ab863739\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.469799 kubelet[2823]: I0904 23:45:46.469789 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c811c3e3a42e28585df5387e2b43f2eb-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"c811c3e3a42e28585df5387e2b43f2eb\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.470120 kubelet[2823]: I0904 23:45:46.469809 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c811c3e3a42e28585df5387e2b43f2eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"c811c3e3a42e28585df5387e2b43f2eb\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.470120 kubelet[2823]: I0904 23:45:46.469826 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.470120 kubelet[2823]: I0904 23:45:46.469844 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.470120 kubelet[2823]: I0904 23:45:46.469858 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c811c3e3a42e28585df5387e2b43f2eb-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"c811c3e3a42e28585df5387e2b43f2eb\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.470120 kubelet[2823]: I0904 23:45:46.469872 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.470035 systemd[1]: Created slice kubepods-burstable-pod0d9c050210332449f290f10c6edab0a2.slice - libcontainer container kubepods-burstable-pod0d9c050210332449f290f10c6edab0a2.slice. 
Sep 4 23:45:46.470317 kubelet[2823]: I0904 23:45:46.469887 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.470317 kubelet[2823]: I0904 23:45:46.469923 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.472391 kubelet[2823]: E0904 23:45:46.472362 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.484709 systemd[1]: Created slice kubepods-burstable-pod692b95e7aa6452dbd4cc1437ab863739.slice - libcontainer container kubepods-burstable-pod692b95e7aa6452dbd4cc1437ab863739.slice. 
Sep 4 23:45:46.486604 kubelet[2823]: E0904 23:45:46.486571 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.651072 kubelet[2823]: I0904 23:45:46.651039 2823 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.651438 kubelet[2823]: E0904 23:45:46.651406 2823 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:46.767567 containerd[1714]: time="2025-09-04T23:45:46.767244774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-a8c1fd94a3,Uid:c811c3e3a42e28585df5387e2b43f2eb,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:46.774487 containerd[1714]: time="2025-09-04T23:45:46.774153130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3,Uid:0d9c050210332449f290f10c6edab0a2,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:46.787431 containerd[1714]: time="2025-09-04T23:45:46.787387882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-a8c1fd94a3,Uid:692b95e7aa6452dbd4cc1437ab863739,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:46.865336 kubelet[2823]: E0904 23:45:46.865271 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-a8c1fd94a3?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="800ms" Sep 4 23:45:47.054028 kubelet[2823]: I0904 23:45:47.053582 2823 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:47.054028 kubelet[2823]: E0904 23:45:47.053926 2823 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:47.078140 kubelet[2823]: E0904 23:45:47.078089 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:45:47.186537 kubelet[2823]: E0904 23:45:47.186496 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:45:47.286998 kubelet[2823]: E0904 23:45:47.286948 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-a8c1fd94a3&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:45:47.427252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055167415.mount: Deactivated successfully. 
Sep 4 23:45:47.457883 containerd[1714]: time="2025-09-04T23:45:47.457822915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:47.468205 containerd[1714]: time="2025-09-04T23:45:47.468057309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 23:45:47.473214 containerd[1714]: time="2025-09-04T23:45:47.472935666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:47.476440 containerd[1714]: time="2025-09-04T23:45:47.476259464Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:47.495566 kubelet[2823]: E0904 23:45:47.495527 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:45:47.497489 containerd[1714]: time="2025-09-04T23:45:47.497290491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:45:47.502023 containerd[1714]: time="2025-09-04T23:45:47.501294729Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:47.504724 containerd[1714]: time="2025-09-04T23:45:47.504690407Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:47.505386 containerd[1714]: time="2025-09-04T23:45:47.505348966Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 738.028672ms" Sep 4 23:45:47.508043 containerd[1714]: time="2025-09-04T23:45:47.508001725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:45:47.508943 containerd[1714]: time="2025-09-04T23:45:47.508919844Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 721.449202ms" Sep 4 23:45:47.511052 containerd[1714]: time="2025-09-04T23:45:47.511019443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 736.756193ms" Sep 4 23:45:47.666667 kubelet[2823]: E0904 23:45:47.666623 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-a8c1fd94a3?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="1.6s" Sep 4 23:45:47.856015 
kubelet[2823]: I0904 23:45:47.855934 2823 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:47.856719 kubelet[2823]: E0904 23:45:47.856664 2823 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:48.333762 kubelet[2823]: E0904 23:45:48.333712 2823 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:45:48.980338 kubelet[2823]: E0904 23:45:48.980280 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-a8c1fd94a3&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:45:49.267238 kubelet[2823]: E0904 23:45:49.267108 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-a8c1fd94a3?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="3.2s" Sep 4 23:45:49.458731 kubelet[2823]: I0904 23:45:49.458691 2823 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:49.459082 kubelet[2823]: E0904 23:45:49.459018 2823 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" 
node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:49.551283 kubelet[2823]: E0904 23:45:49.551165 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:45:49.555829 kubelet[2823]: E0904 23:45:49.555790 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:45:49.974161 kubelet[2823]: E0904 23:45:49.974125 2823 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:45:51.276401 containerd[1714]: time="2025-09-04T23:45:51.267082952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:51.276401 containerd[1714]: time="2025-09-04T23:45:51.267658592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:51.276401 containerd[1714]: time="2025-09-04T23:45:51.267676952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:51.276401 containerd[1714]: time="2025-09-04T23:45:51.268257192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:51.280004 containerd[1714]: time="2025-09-04T23:45:51.278691111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:51.280004 containerd[1714]: time="2025-09-04T23:45:51.278753951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:51.280004 containerd[1714]: time="2025-09-04T23:45:51.278770591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:51.280004 containerd[1714]: time="2025-09-04T23:45:51.278845871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:51.285942 containerd[1714]: time="2025-09-04T23:45:51.285207591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:51.285942 containerd[1714]: time="2025-09-04T23:45:51.285774231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:51.285942 containerd[1714]: time="2025-09-04T23:45:51.285787911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:51.286125 containerd[1714]: time="2025-09-04T23:45:51.285977711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:51.327319 kubelet[2823]: E0904 23:45:51.327178 2823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-a8c1fd94a3.186239102c344e8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-a8c1fd94a3,UID:ci-4230.2.2-n-a8c1fd94a3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-a8c1fd94a3,},FirstTimestamp:2025-09-04 23:45:46.251800207 +0000 UTC m=+1.231058134,LastTimestamp:2025-09-04 23:45:46.251800207 +0000 UTC m=+1.231058134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-a8c1fd94a3,}" Sep 4 23:45:51.355421 systemd[1]: Started cri-containerd-89cb177488c1ce83913873f38357fd392bbf0d4654ac5898cff9c8acc5371975.scope - libcontainer container 89cb177488c1ce83913873f38357fd392bbf0d4654ac5898cff9c8acc5371975. Sep 4 23:45:51.364623 systemd[1]: Started cri-containerd-e51f5b711c0c1cf1f11607610a2ba8d7729f6565905628385cb478aafe70089f.scope - libcontainer container e51f5b711c0c1cf1f11607610a2ba8d7729f6565905628385cb478aafe70089f. Sep 4 23:45:51.367736 systemd[1]: Started cri-containerd-efbd0fe7d853265927c1107046a3830e4303107b662079e33a06cda95fc70dc4.scope - libcontainer container efbd0fe7d853265927c1107046a3830e4303107b662079e33a06cda95fc70dc4. 
Sep 4 23:45:51.417168 containerd[1714]: time="2025-09-04T23:45:51.416883578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-a8c1fd94a3,Uid:692b95e7aa6452dbd4cc1437ab863739,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51f5b711c0c1cf1f11607610a2ba8d7729f6565905628385cb478aafe70089f\"" Sep 4 23:45:51.435769 containerd[1714]: time="2025-09-04T23:45:51.435376536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-a8c1fd94a3,Uid:c811c3e3a42e28585df5387e2b43f2eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"89cb177488c1ce83913873f38357fd392bbf0d4654ac5898cff9c8acc5371975\"" Sep 4 23:45:51.440609 containerd[1714]: time="2025-09-04T23:45:51.440541776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3,Uid:0d9c050210332449f290f10c6edab0a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"efbd0fe7d853265927c1107046a3830e4303107b662079e33a06cda95fc70dc4\"" Sep 4 23:45:51.575135 containerd[1714]: time="2025-09-04T23:45:51.574850203Z" level=info msg="CreateContainer within sandbox \"e51f5b711c0c1cf1f11607610a2ba8d7729f6565905628385cb478aafe70089f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:45:51.582085 containerd[1714]: time="2025-09-04T23:45:51.581919762Z" level=info msg="CreateContainer within sandbox \"89cb177488c1ce83913873f38357fd392bbf0d4654ac5898cff9c8acc5371975\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:45:51.587240 containerd[1714]: time="2025-09-04T23:45:51.587162361Z" level=info msg="CreateContainer within sandbox \"efbd0fe7d853265927c1107046a3830e4303107b662079e33a06cda95fc70dc4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:45:51.663427 containerd[1714]: time="2025-09-04T23:45:51.663368234Z" level=info msg="CreateContainer within sandbox 
\"e51f5b711c0c1cf1f11607610a2ba8d7729f6565905628385cb478aafe70089f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ef0dfd0debe8692f528d0980d7bc53b78cdf206a93867abef63082d3ae8bbd73\"" Sep 4 23:45:51.664252 containerd[1714]: time="2025-09-04T23:45:51.664214954Z" level=info msg="StartContainer for \"ef0dfd0debe8692f528d0980d7bc53b78cdf206a93867abef63082d3ae8bbd73\"" Sep 4 23:45:51.670228 containerd[1714]: time="2025-09-04T23:45:51.669427314Z" level=info msg="CreateContainer within sandbox \"89cb177488c1ce83913873f38357fd392bbf0d4654ac5898cff9c8acc5371975\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c4b5f4d320c6de8c41b182e0873abf44a3e248e28c0e18a36e31f52122c8038\"" Sep 4 23:45:51.670740 containerd[1714]: time="2025-09-04T23:45:51.670695593Z" level=info msg="StartContainer for \"8c4b5f4d320c6de8c41b182e0873abf44a3e248e28c0e18a36e31f52122c8038\"" Sep 4 23:45:51.672473 containerd[1714]: time="2025-09-04T23:45:51.672408393Z" level=info msg="CreateContainer within sandbox \"efbd0fe7d853265927c1107046a3830e4303107b662079e33a06cda95fc70dc4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47a66ba02f2c79b3db485ab16786b4edd9e0e3b21d42ed7233e3a0704e5a535a\"" Sep 4 23:45:51.674396 containerd[1714]: time="2025-09-04T23:45:51.674352593Z" level=info msg="StartContainer for \"47a66ba02f2c79b3db485ab16786b4edd9e0e3b21d42ed7233e3a0704e5a535a\"" Sep 4 23:45:51.702994 systemd[1]: Started cri-containerd-ef0dfd0debe8692f528d0980d7bc53b78cdf206a93867abef63082d3ae8bbd73.scope - libcontainer container ef0dfd0debe8692f528d0980d7bc53b78cdf206a93867abef63082d3ae8bbd73. Sep 4 23:45:51.722423 systemd[1]: Started cri-containerd-8c4b5f4d320c6de8c41b182e0873abf44a3e248e28c0e18a36e31f52122c8038.scope - libcontainer container 8c4b5f4d320c6de8c41b182e0873abf44a3e248e28c0e18a36e31f52122c8038. 
Sep 4 23:45:51.727825 systemd[1]: Started cri-containerd-47a66ba02f2c79b3db485ab16786b4edd9e0e3b21d42ed7233e3a0704e5a535a.scope - libcontainer container 47a66ba02f2c79b3db485ab16786b4edd9e0e3b21d42ed7233e3a0704e5a535a. Sep 4 23:45:52.662708 kubelet[2823]: I0904 23:45:52.662666 2823 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:52.791283 containerd[1714]: time="2025-09-04T23:45:52.790804183Z" level=info msg="StartContainer for \"ef0dfd0debe8692f528d0980d7bc53b78cdf206a93867abef63082d3ae8bbd73\" returns successfully" Sep 4 23:45:52.791283 containerd[1714]: time="2025-09-04T23:45:52.790858583Z" level=info msg="StartContainer for \"47a66ba02f2c79b3db485ab16786b4edd9e0e3b21d42ed7233e3a0704e5a535a\" returns successfully" Sep 4 23:45:52.791283 containerd[1714]: time="2025-09-04T23:45:52.791266543Z" level=info msg="StartContainer for \"8c4b5f4d320c6de8c41b182e0873abf44a3e248e28c0e18a36e31f52122c8038\" returns successfully" Sep 4 23:45:52.803387 kubelet[2823]: E0904 23:45:52.801953 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:52.807361 kubelet[2823]: E0904 23:45:52.807284 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:53.811772 kubelet[2823]: E0904 23:45:53.811000 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:53.811772 kubelet[2823]: E0904 23:45:53.811360 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:53.811772 kubelet[2823]: 
E0904 23:45:53.811598 2823 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:53.974693 kubelet[2823]: E0904 23:45:53.974647 2823 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-a8c1fd94a3\" not found" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.194880 kubelet[2823]: I0904 23:45:54.194345 2823 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.194880 kubelet[2823]: E0904 23:45:54.194385 2823 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-n-a8c1fd94a3\": node \"ci-4230.2.2-n-a8c1fd94a3\" not found" Sep 4 23:45:54.213668 kubelet[2823]: E0904 23:45:54.213632 2823 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" Sep 4 23:45:54.362547 kubelet[2823]: I0904 23:45:54.362119 2823 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.474500 kubelet[2823]: E0904 23:45:54.473798 2823 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.474500 kubelet[2823]: I0904 23:45:54.473834 2823 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.477614 kubelet[2823]: E0904 23:45:54.477397 2823 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-a8c1fd94a3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" Sep 4 
23:45:54.477614 kubelet[2823]: I0904 23:45:54.477432 2823 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.483073 kubelet[2823]: E0904 23:45:54.483029 2823 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.809912 kubelet[2823]: I0904 23:45:54.809286 2823 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:45:54.852260 kubelet[2823]: I0904 23:45:54.852229 2823 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 4 23:45:55.248912 kubelet[2823]: I0904 23:45:55.248640 2823 apiserver.go:52] "Watching apiserver" Sep 4 23:45:55.269791 kubelet[2823]: I0904 23:45:55.269748 2823 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:45:56.361287 kubelet[2823]: I0904 23:45:56.360450 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" podStartSLOduration=2.360415044 podStartE2EDuration="2.360415044s" podCreationTimestamp="2025-09-04 23:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:56.360401604 +0000 UTC m=+11.339659531" watchObservedRunningTime="2025-09-04 23:45:56.360415044 +0000 UTC m=+11.339672971" Sep 4 23:45:57.247374 systemd[1]: Reload requested from client PID 3173 ('systemctl') (unit session-9.scope)... Sep 4 23:45:57.247390 systemd[1]: Reloading... Sep 4 23:45:57.351222 zram_generator::config[3221]: No configuration found. 
Sep 4 23:45:57.458742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:57.574012 systemd[1]: Reloading finished in 326 ms. Sep 4 23:45:57.598601 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:57.616966 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:45:57.617256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:57.617316 systemd[1]: kubelet.service: Consumed 1.576s CPU time, 127M memory peak. Sep 4 23:45:57.623965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:57.747123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:57.758567 (kubelet)[3284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:46:01.332275 kubelet[3284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:46:01.332275 kubelet[3284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:46:01.332275 kubelet[3284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:46:01.332275 kubelet[3284]: I0904 23:45:57.855262 3284 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:46:01.332275 kubelet[3284]: I0904 23:45:57.862243 3284 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 23:46:01.332275 kubelet[3284]: I0904 23:45:57.862288 3284 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:46:01.332275 kubelet[3284]: I0904 23:45:57.862642 3284 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 23:46:01.334473 kubelet[3284]: I0904 23:46:01.334439 3284 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 4 23:46:01.337123 kubelet[3284]: I0904 23:46:01.336898 3284 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:46:01.344490 kubelet[3284]: E0904 23:46:01.344145 3284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:46:01.344490 kubelet[3284]: I0904 23:46:01.344202 3284 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:46:01.348902 kubelet[3284]: I0904 23:46:01.348859 3284 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:46:01.349122 kubelet[3284]: I0904 23:46:01.349066 3284 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:46:01.349293 kubelet[3284]: I0904 23:46:01.349104 3284 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-a8c1fd94a3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:46:01.349413 kubelet[3284]: I0904 23:46:01.349301 3284 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 
23:46:01.349413 kubelet[3284]: I0904 23:46:01.349311 3284 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 23:46:01.349413 kubelet[3284]: I0904 23:46:01.349361 3284 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:01.349525 kubelet[3284]: I0904 23:46:01.349508 3284 kubelet.go:480] "Attempting to sync node with API server" Sep 4 23:46:01.349566 kubelet[3284]: I0904 23:46:01.349528 3284 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:46:01.350088 kubelet[3284]: I0904 23:46:01.349549 3284 kubelet.go:386] "Adding apiserver pod source" Sep 4 23:46:01.350147 kubelet[3284]: I0904 23:46:01.350101 3284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:46:01.354082 kubelet[3284]: I0904 23:46:01.354034 3284 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:46:01.354668 kubelet[3284]: I0904 23:46:01.354641 3284 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 23:46:01.363209 kubelet[3284]: I0904 23:46:01.361416 3284 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:46:01.363450 kubelet[3284]: I0904 23:46:01.363304 3284 server.go:1289] "Started kubelet" Sep 4 23:46:01.365672 kubelet[3284]: I0904 23:46:01.365551 3284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:46:01.366454 kubelet[3284]: I0904 23:46:01.366262 3284 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:46:01.367318 kubelet[3284]: I0904 23:46:01.366925 3284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:46:01.367318 kubelet[3284]: I0904 23:46:01.367208 3284 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:46:01.373260 
kubelet[3284]: I0904 23:46:01.373166 3284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:46:01.375804 kubelet[3284]: I0904 23:46:01.375774 3284 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:46:01.376399 kubelet[3284]: E0904 23:46:01.376263 3284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-a8c1fd94a3\" not found" Sep 4 23:46:01.378067 kubelet[3284]: I0904 23:46:01.377170 3284 server.go:317] "Adding debug handlers to kubelet server" Sep 4 23:46:01.379513 kubelet[3284]: I0904 23:46:01.378765 3284 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:46:01.392804 kubelet[3284]: I0904 23:46:01.379890 3284 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:46:01.397252 kubelet[3284]: I0904 23:46:01.396763 3284 factory.go:223] Registration of the systemd container factory successfully Sep 4 23:46:01.397252 kubelet[3284]: I0904 23:46:01.396869 3284 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:46:01.398621 kubelet[3284]: I0904 23:46:01.398598 3284 factory.go:223] Registration of the containerd container factory successfully Sep 4 23:46:01.406078 kubelet[3284]: I0904 23:46:01.406037 3284 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 4 23:46:01.408017 kubelet[3284]: I0904 23:46:01.407986 3284 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 4 23:46:01.408268 kubelet[3284]: I0904 23:46:01.408158 3284 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 4 23:46:01.408268 kubelet[3284]: I0904 23:46:01.408203 3284 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:46:01.408268 kubelet[3284]: I0904 23:46:01.408211 3284 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 23:46:01.408590 kubelet[3284]: E0904 23:46:01.408381 3284 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:46:01.455785 kubelet[3284]: I0904 23:46:01.455755 3284 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:46:01.455785 kubelet[3284]: I0904 23:46:01.455774 3284 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:46:01.455785 kubelet[3284]: I0904 23:46:01.455797 3284 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:01.455958 kubelet[3284]: I0904 23:46:01.455940 3284 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:46:01.455982 kubelet[3284]: I0904 23:46:01.455949 3284 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:46:01.455982 kubelet[3284]: I0904 23:46:01.455969 3284 policy_none.go:49] "None policy: Start" Sep 4 23:46:01.456019 kubelet[3284]: I0904 23:46:01.455985 3284 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:46:01.456019 kubelet[3284]: I0904 23:46:01.455993 3284 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:46:01.456096 kubelet[3284]: I0904 23:46:01.456074 3284 state_mem.go:75] "Updated machine memory state" Sep 4 23:46:01.460594 kubelet[3284]: E0904 23:46:01.460193 3284 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 23:46:01.460594 kubelet[3284]: I0904 23:46:01.460383 
3284 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:46:01.460594 kubelet[3284]: I0904 23:46:01.460400 3284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:46:01.460858 kubelet[3284]: I0904 23:46:01.460846 3284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:46:01.461885 kubelet[3284]: E0904 23:46:01.461865 3284 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:46:01.509738 kubelet[3284]: I0904 23:46:01.509679 3284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.510445 kubelet[3284]: I0904 23:46:01.509679 3284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.510445 kubelet[3284]: I0904 23:46:01.510023 3284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.519001 kubelet[3284]: I0904 23:46:01.518959 3284 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 4 23:46:01.524272 kubelet[3284]: I0904 23:46:01.524003 3284 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 4 23:46:01.525038 kubelet[3284]: I0904 23:46:01.524911 3284 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 4 23:46:01.525038 kubelet[3284]: E0904 23:46:01.524967 3284 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.566979 kubelet[3284]: I0904 23:46:01.566923 3284 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.583421 kubelet[3284]: I0904 23:46:01.582216 3284 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.583421 kubelet[3284]: I0904 23:46:01.582319 3284 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.607067 sudo[3319]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:46:01.607429 sudo[3319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:46:01.693684 kubelet[3284]: I0904 23:46:01.693635 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693684 kubelet[3284]: I0904 23:46:01.693686 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693850 kubelet[3284]: I0904 23:46:01.693710 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/692b95e7aa6452dbd4cc1437ab863739-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-a8c1fd94a3\" 
(UID: \"692b95e7aa6452dbd4cc1437ab863739\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693850 kubelet[3284]: I0904 23:46:01.693728 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c811c3e3a42e28585df5387e2b43f2eb-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"c811c3e3a42e28585df5387e2b43f2eb\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693850 kubelet[3284]: I0904 23:46:01.693745 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693850 kubelet[3284]: I0904 23:46:01.693759 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693850 kubelet[3284]: I0904 23:46:01.693776 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d9c050210332449f290f10c6edab0a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"0d9c050210332449f290f10c6edab0a2\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693958 kubelet[3284]: I0904 23:46:01.693792 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/c811c3e3a42e28585df5387e2b43f2eb-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"c811c3e3a42e28585df5387e2b43f2eb\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:01.693958 kubelet[3284]: I0904 23:46:01.693808 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c811c3e3a42e28585df5387e2b43f2eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" (UID: \"c811c3e3a42e28585df5387e2b43f2eb\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:02.060065 sudo[3319]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:02.353007 kubelet[3284]: I0904 23:46:02.351566 3284 apiserver.go:52] "Watching apiserver" Sep 4 23:46:02.393304 kubelet[3284]: I0904 23:46:02.393244 3284 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:46:02.438616 kubelet[3284]: I0904 23:46:02.438102 3284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:02.440175 kubelet[3284]: I0904 23:46:02.440152 3284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:02.469134 kubelet[3284]: I0904 23:46:02.469022 3284 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 4 23:46:02.469556 kubelet[3284]: E0904 23:46:02.469326 3284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-a8c1fd94a3\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:02.469809 kubelet[3284]: I0904 23:46:02.469783 3284 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 4 23:46:02.470038 kubelet[3284]: E0904 23:46:02.469912 3284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-a8c1fd94a3\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-a8c1fd94a3" Sep 4 23:46:02.486832 kubelet[3284]: I0904 23:46:02.486634 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-a8c1fd94a3" podStartSLOduration=1.486615378 podStartE2EDuration="1.486615378s" podCreationTimestamp="2025-09-04 23:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:02.485975339 +0000 UTC m=+4.724146560" watchObservedRunningTime="2025-09-04 23:46:02.486615378 +0000 UTC m=+4.724786599" Sep 4 23:46:02.516671 kubelet[3284]: I0904 23:46:02.516461 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-a8c1fd94a3" podStartSLOduration=1.516440082 podStartE2EDuration="1.516440082s" podCreationTimestamp="2025-09-04 23:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:02.498625332 +0000 UTC m=+4.736796553" watchObservedRunningTime="2025-09-04 23:46:02.516440082 +0000 UTC m=+4.754611303" Sep 4 23:46:03.576724 kubelet[3284]: I0904 23:46:03.576254 3284 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:46:03.577121 containerd[1714]: time="2025-09-04T23:46:03.576639049Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 23:46:03.578546 kubelet[3284]: I0904 23:46:03.577461 3284 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:46:03.667165 sudo[2254]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:03.762530 sshd[2253]: Connection closed by 10.200.16.10 port 34878 Sep 4 23:46:03.763130 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:03.768256 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:46:03.768697 systemd[1]: sshd@6-10.200.20.4:22-10.200.16.10:34878.service: Deactivated successfully. Sep 4 23:46:03.772158 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:46:03.772501 systemd[1]: session-9.scope: Consumed 6.283s CPU time, 263M memory peak. Sep 4 23:46:03.775038 systemd-logind[1697]: Removed session 9. Sep 4 23:46:04.610468 kubelet[3284]: I0904 23:46:04.610207 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aba7399c-bf0e-4046-9641-f26fc9358db7-kube-proxy\") pod \"kube-proxy-f2xzb\" (UID: \"aba7399c-bf0e-4046-9641-f26fc9358db7\") " pod="kube-system/kube-proxy-f2xzb" Sep 4 23:46:04.611378 systemd[1]: Created slice kubepods-besteffort-podaba7399c_bf0e_4046_9641_f26fc9358db7.slice - libcontainer container kubepods-besteffort-podaba7399c_bf0e_4046_9641_f26fc9358db7.slice. 
Sep 4 23:46:04.613443 kubelet[3284]: I0904 23:46:04.611483 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aba7399c-bf0e-4046-9641-f26fc9358db7-xtables-lock\") pod \"kube-proxy-f2xzb\" (UID: \"aba7399c-bf0e-4046-9641-f26fc9358db7\") " pod="kube-system/kube-proxy-f2xzb" Sep 4 23:46:04.613443 kubelet[3284]: I0904 23:46:04.611515 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aba7399c-bf0e-4046-9641-f26fc9358db7-lib-modules\") pod \"kube-proxy-f2xzb\" (UID: \"aba7399c-bf0e-4046-9641-f26fc9358db7\") " pod="kube-system/kube-proxy-f2xzb" Sep 4 23:46:04.613443 kubelet[3284]: I0904 23:46:04.611530 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdc7\" (UniqueName: \"kubernetes.io/projected/aba7399c-bf0e-4046-9641-f26fc9358db7-kube-api-access-bkdc7\") pod \"kube-proxy-f2xzb\" (UID: \"aba7399c-bf0e-4046-9641-f26fc9358db7\") " pod="kube-system/kube-proxy-f2xzb" Sep 4 23:46:04.623659 systemd[1]: Created slice kubepods-burstable-pod9bd97563_dcf2_4b9c_bef3_cdce5f215b9f.slice - libcontainer container kubepods-burstable-pod9bd97563_dcf2_4b9c_bef3_cdce5f215b9f.slice. Sep 4 23:46:04.790945 systemd[1]: Created slice kubepods-besteffort-pod6a71ac55_f645_43f5_871a_0903af937eb0.slice - libcontainer container kubepods-besteffort-pod6a71ac55_f645_43f5_871a_0903af937eb0.slice. 
Sep 4 23:46:04.813177 kubelet[3284]: I0904 23:46:04.813112 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-net\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813177 kubelet[3284]: I0904 23:46:04.813178 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-run\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813177 kubelet[3284]: I0904 23:46:04.813208 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-bpf-maps\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813177 kubelet[3284]: I0904 23:46:04.813224 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hostproc\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813177 kubelet[3284]: I0904 23:46:04.813239 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-cgroup\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813177 kubelet[3284]: I0904 23:46:04.813252 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-lib-modules\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813840 kubelet[3284]: I0904 23:46:04.813266 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-xtables-lock\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813840 kubelet[3284]: I0904 23:46:04.813281 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-etc-cni-netd\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813840 kubelet[3284]: I0904 23:46:04.813295 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-clustermesh-secrets\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813840 kubelet[3284]: I0904 23:46:04.813312 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-kernel\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813840 kubelet[3284]: I0904 23:46:04.813328 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hubble-tls\") pod 
\"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813943 kubelet[3284]: I0904 23:46:04.813342 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsp5h\" (UniqueName: \"kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-kube-api-access-fsp5h\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813943 kubelet[3284]: I0904 23:46:04.813363 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cni-path\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.813943 kubelet[3284]: I0904 23:46:04.813389 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-config-path\") pod \"cilium-7j95n\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") " pod="kube-system/cilium-7j95n" Sep 4 23:46:04.914548 kubelet[3284]: I0904 23:46:04.914205 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a71ac55-f645-43f5-871a-0903af937eb0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-thkwd\" (UID: \"6a71ac55-f645-43f5-871a-0903af937eb0\") " pod="kube-system/cilium-operator-6c4d7847fc-thkwd" Sep 4 23:46:04.914548 kubelet[3284]: I0904 23:46:04.914260 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdbpc\" (UniqueName: \"kubernetes.io/projected/6a71ac55-f645-43f5-871a-0903af937eb0-kube-api-access-qdbpc\") pod \"cilium-operator-6c4d7847fc-thkwd\" (UID: 
\"6a71ac55-f645-43f5-871a-0903af937eb0\") " pod="kube-system/cilium-operator-6c4d7847fc-thkwd" Sep 4 23:46:04.925403 containerd[1714]: time="2025-09-04T23:46:04.924978335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2xzb,Uid:aba7399c-bf0e-4046-9641-f26fc9358db7,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:04.983262 containerd[1714]: time="2025-09-04T23:46:04.983046383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:04.983262 containerd[1714]: time="2025-09-04T23:46:04.983105823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:04.983262 containerd[1714]: time="2025-09-04T23:46:04.983121383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:04.983630 containerd[1714]: time="2025-09-04T23:46:04.983282942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:05.001415 systemd[1]: Started cri-containerd-1810c64f53534a1b2dac214188e7c57fdf6076d959a98e3e808d5523243111d6.scope - libcontainer container 1810c64f53534a1b2dac214188e7c57fdf6076d959a98e3e808d5523243111d6. 
Sep 4 23:46:05.029574 containerd[1714]: time="2025-09-04T23:46:05.029437237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2xzb,Uid:aba7399c-bf0e-4046-9641-f26fc9358db7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1810c64f53534a1b2dac214188e7c57fdf6076d959a98e3e808d5523243111d6\""
Sep 4 23:46:05.050728 containerd[1714]: time="2025-09-04T23:46:05.050681025Z" level=info msg="CreateContainer within sandbox \"1810c64f53534a1b2dac214188e7c57fdf6076d959a98e3e808d5523243111d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 23:46:05.094832 containerd[1714]: time="2025-09-04T23:46:05.094789360Z" level=info msg="CreateContainer within sandbox \"1810c64f53534a1b2dac214188e7c57fdf6076d959a98e3e808d5523243111d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a3d67a2a77e35600fa93f093353a8e4cc8552195ddbc4916a20c1cc3e280b9d\""
Sep 4 23:46:05.096882 containerd[1714]: time="2025-09-04T23:46:05.096773719Z" level=info msg="StartContainer for \"0a3d67a2a77e35600fa93f093353a8e4cc8552195ddbc4916a20c1cc3e280b9d\""
Sep 4 23:46:05.097084 containerd[1714]: time="2025-09-04T23:46:05.096790879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-thkwd,Uid:6a71ac55-f645-43f5-871a-0903af937eb0,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:05.122414 systemd[1]: Started cri-containerd-0a3d67a2a77e35600fa93f093353a8e4cc8552195ddbc4916a20c1cc3e280b9d.scope - libcontainer container 0a3d67a2a77e35600fa93f093353a8e4cc8552195ddbc4916a20c1cc3e280b9d.
Sep 4 23:46:05.149600 containerd[1714]: time="2025-09-04T23:46:05.149399050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:05.149600 containerd[1714]: time="2025-09-04T23:46:05.149460410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:05.149600 containerd[1714]: time="2025-09-04T23:46:05.149472210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:05.149600 containerd[1714]: time="2025-09-04T23:46:05.149556569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:05.169975 containerd[1714]: time="2025-09-04T23:46:05.169554118Z" level=info msg="StartContainer for \"0a3d67a2a77e35600fa93f093353a8e4cc8552195ddbc4916a20c1cc3e280b9d\" returns successfully"
Sep 4 23:46:05.172454 systemd[1]: Started cri-containerd-5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3.scope - libcontainer container 5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3.
Sep 4 23:46:05.217496 containerd[1714]: time="2025-09-04T23:46:05.217445092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-thkwd,Uid:6a71ac55-f645-43f5-871a-0903af937eb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\""
Sep 4 23:46:05.220504 containerd[1714]: time="2025-09-04T23:46:05.220463970Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:46:05.228538 containerd[1714]: time="2025-09-04T23:46:05.228100766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j95n,Uid:9bd97563-dcf2-4b9c-bef3-cdce5f215b9f,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:05.411332 containerd[1714]: time="2025-09-04T23:46:05.410929463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:05.411332 containerd[1714]: time="2025-09-04T23:46:05.410992383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:05.411332 containerd[1714]: time="2025-09-04T23:46:05.411008143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:05.412118 containerd[1714]: time="2025-09-04T23:46:05.411504983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:05.431450 systemd[1]: Started cri-containerd-e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464.scope - libcontainer container e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464.
Sep 4 23:46:05.466764 containerd[1714]: time="2025-09-04T23:46:05.466561632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j95n,Uid:9bd97563-dcf2-4b9c-bef3-cdce5f215b9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\""
Sep 4 23:46:05.493143 kubelet[3284]: I0904 23:46:05.492611 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f2xzb" podStartSLOduration=1.492589298 podStartE2EDuration="1.492589298s" podCreationTimestamp="2025-09-04 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:05.476304187 +0000 UTC m=+7.714475408" watchObservedRunningTime="2025-09-04 23:46:05.492589298 +0000 UTC m=+7.730760519"
Sep 4 23:46:08.080787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479385569.mount: Deactivated successfully.
Sep 4 23:46:08.865463 containerd[1714]: time="2025-09-04T23:46:08.865403892Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.868358 containerd[1714]: time="2025-09-04T23:46:08.868158491Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 4 23:46:08.871571 containerd[1714]: time="2025-09-04T23:46:08.871499689Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.873083 containerd[1714]: time="2025-09-04T23:46:08.872895048Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.652376518s"
Sep 4 23:46:08.873083 containerd[1714]: time="2025-09-04T23:46:08.872968728Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 4 23:46:08.874232 containerd[1714]: time="2025-09-04T23:46:08.874064887Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 4 23:46:08.885466 containerd[1714]: time="2025-09-04T23:46:08.885414921Z" level=info msg="CreateContainer within sandbox \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:46:08.923976 containerd[1714]: time="2025-09-04T23:46:08.923930659Z" level=info msg="CreateContainer within sandbox \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\""
Sep 4 23:46:08.926527 containerd[1714]: time="2025-09-04T23:46:08.926319858Z" level=info msg="StartContainer for \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\""
Sep 4 23:46:08.953406 systemd[1]: Started cri-containerd-55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3.scope - libcontainer container 55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3.
Sep 4 23:46:08.997157 containerd[1714]: time="2025-09-04T23:46:08.997102019Z" level=info msg="StartContainer for \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\" returns successfully"
Sep 4 23:46:09.496218 kubelet[3284]: I0904 23:46:09.496122 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-thkwd" podStartSLOduration=1.841594383 podStartE2EDuration="5.49609478s" podCreationTimestamp="2025-09-04 23:46:04 +0000 UTC" firstStartedPulling="2025-09-04 23:46:05.21944773 +0000 UTC m=+7.457618951" lastFinishedPulling="2025-09-04 23:46:08.873948127 +0000 UTC m=+11.112119348" observedRunningTime="2025-09-04 23:46:09.494027541 +0000 UTC m=+11.732198762" watchObservedRunningTime="2025-09-04 23:46:09.49609478 +0000 UTC m=+11.734266001"
Sep 4 23:46:13.007604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445305125.mount: Deactivated successfully.
Sep 4 23:46:31.781463 containerd[1714]: time="2025-09-04T23:46:31.781248869Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:31.784514 containerd[1714]: time="2025-09-04T23:46:31.784299547Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 4 23:46:31.788291 containerd[1714]: time="2025-09-04T23:46:31.787897345Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:31.789648 containerd[1714]: time="2025-09-04T23:46:31.789611464Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 22.915507737s"
Sep 4 23:46:31.789763 containerd[1714]: time="2025-09-04T23:46:31.789748464Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 4 23:46:31.800038 containerd[1714]: time="2025-09-04T23:46:31.799991457Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:46:31.849429 containerd[1714]: time="2025-09-04T23:46:31.849304386Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\""
Sep 4 23:46:31.851550 containerd[1714]: time="2025-09-04T23:46:31.851489544Z" level=info msg="StartContainer for \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\""
Sep 4 23:46:31.881452 systemd[1]: Started cri-containerd-2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c.scope - libcontainer container 2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c.
Sep 4 23:46:31.911226 containerd[1714]: time="2025-09-04T23:46:31.910959467Z" level=info msg="StartContainer for \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\" returns successfully"
Sep 4 23:46:31.920030 systemd[1]: cri-containerd-2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c.scope: Deactivated successfully.
Sep 4 23:46:32.821231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c-rootfs.mount: Deactivated successfully.
Sep 4 23:46:33.924613 containerd[1714]: time="2025-09-04T23:46:33.924524109Z" level=info msg="shim disconnected" id=2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c namespace=k8s.io
Sep 4 23:46:33.924613 containerd[1714]: time="2025-09-04T23:46:33.924606469Z" level=warning msg="cleaning up after shim disconnected" id=2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c namespace=k8s.io
Sep 4 23:46:33.925150 containerd[1714]: time="2025-09-04T23:46:33.924637829Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:33.936250 containerd[1714]: time="2025-09-04T23:46:33.936168021Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:46:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:46:34.519368 containerd[1714]: time="2025-09-04T23:46:34.519296971Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:46:35.029122 containerd[1714]: time="2025-09-04T23:46:35.029066207Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\""
Sep 4 23:46:35.032226 containerd[1714]: time="2025-09-04T23:46:35.031178646Z" level=info msg="StartContainer for \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\""
Sep 4 23:46:35.062509 systemd[1]: Started cri-containerd-923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd.scope - libcontainer container 923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd.
Sep 4 23:46:35.091528 containerd[1714]: time="2025-09-04T23:46:35.091482448Z" level=info msg="StartContainer for \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\" returns successfully"
Sep 4 23:46:35.103263 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:46:35.103498 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:35.104086 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:35.112715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:35.116512 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:46:35.117071 systemd[1]: cri-containerd-923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd.scope: Deactivated successfully.
Sep 4 23:46:35.136293 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:35.641996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd-rootfs.mount: Deactivated successfully.
Sep 4 23:46:36.384964 containerd[1714]: time="2025-09-04T23:46:36.384891653Z" level=info msg="shim disconnected" id=923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd namespace=k8s.io
Sep 4 23:46:36.384964 containerd[1714]: time="2025-09-04T23:46:36.384957253Z" level=warning msg="cleaning up after shim disconnected" id=923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd namespace=k8s.io
Sep 4 23:46:36.384964 containerd[1714]: time="2025-09-04T23:46:36.384966013Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:36.547762 containerd[1714]: time="2025-09-04T23:46:36.547722344Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:46:37.078443 containerd[1714]: time="2025-09-04T23:46:37.078345709Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\""
Sep 4 23:46:37.079316 containerd[1714]: time="2025-09-04T23:46:37.078975389Z" level=info msg="StartContainer for \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\""
Sep 4 23:46:37.114421 systemd[1]: Started cri-containerd-79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94.scope - libcontainer container 79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94.
Sep 4 23:46:37.145709 systemd[1]: cri-containerd-79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94.scope: Deactivated successfully.
Sep 4 23:46:37.148922 containerd[1714]: time="2025-09-04T23:46:37.148801782Z" level=info msg="StartContainer for \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\" returns successfully"
Sep 4 23:46:37.169077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94-rootfs.mount: Deactivated successfully.
Sep 4 23:46:38.184723 containerd[1714]: time="2025-09-04T23:46:38.184589329Z" level=info msg="shim disconnected" id=79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94 namespace=k8s.io
Sep 4 23:46:38.184723 containerd[1714]: time="2025-09-04T23:46:38.184646449Z" level=warning msg="cleaning up after shim disconnected" id=79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94 namespace=k8s.io
Sep 4 23:46:38.184723 containerd[1714]: time="2025-09-04T23:46:38.184655169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:38.792385 containerd[1714]: time="2025-09-04T23:46:38.792340203Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:46:39.042758 containerd[1714]: time="2025-09-04T23:46:39.042648875Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\""
Sep 4 23:46:39.043613 containerd[1714]: time="2025-09-04T23:46:39.043576355Z" level=info msg="StartContainer for \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\""
Sep 4 23:46:39.074415 systemd[1]: Started cri-containerd-48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb.scope - libcontainer container 48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb.
Sep 4 23:46:39.097346 systemd[1]: cri-containerd-48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb.scope: Deactivated successfully.
Sep 4 23:46:39.105382 containerd[1714]: time="2025-09-04T23:46:39.104995354Z" level=info msg="StartContainer for \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\" returns successfully"
Sep 4 23:46:39.121167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb-rootfs.mount: Deactivated successfully.
Sep 4 23:46:40.124073 containerd[1714]: time="2025-09-04T23:46:40.123987072Z" level=info msg="shim disconnected" id=48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb namespace=k8s.io
Sep 4 23:46:40.124073 containerd[1714]: time="2025-09-04T23:46:40.124041632Z" level=warning msg="cleaning up after shim disconnected" id=48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb namespace=k8s.io
Sep 4 23:46:40.124073 containerd[1714]: time="2025-09-04T23:46:40.124049432Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:40.736056 containerd[1714]: time="2025-09-04T23:46:40.736006382Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:46:40.983096 containerd[1714]: time="2025-09-04T23:46:40.983043977Z" level=info msg="CreateContainer within sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\""
Sep 4 23:46:40.984543 containerd[1714]: time="2025-09-04T23:46:40.983756777Z" level=info msg="StartContainer for \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\""
Sep 4 23:46:41.021400 systemd[1]: Started cri-containerd-7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774.scope - libcontainer container 7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774.
Sep 4 23:46:41.059854 containerd[1714]: time="2025-09-04T23:46:41.059707326Z" level=info msg="StartContainer for \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\" returns successfully"
Sep 4 23:46:41.203724 kubelet[3284]: I0904 23:46:41.203686 3284 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 23:46:41.469751 systemd[1]: Created slice kubepods-burstable-pod87a3bd6e_f029_4b35_95a4_fa4c0a14f540.slice - libcontainer container kubepods-burstable-pod87a3bd6e_f029_4b35_95a4_fa4c0a14f540.slice.
Sep 4 23:46:41.493717 systemd[1]: Created slice kubepods-burstable-pod2a7a0f8a_5404_40b0_9d0f_7fa0af72f4d8.slice - libcontainer container kubepods-burstable-pod2a7a0f8a_5404_40b0_9d0f_7fa0af72f4d8.slice.
Sep 4 23:46:41.556347 kubelet[3284]: I0904 23:46:41.556305 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs9vf\" (UniqueName: \"kubernetes.io/projected/87a3bd6e-f029-4b35-95a4-fa4c0a14f540-kube-api-access-vs9vf\") pod \"coredns-674b8bbfcf-rwh7v\" (UID: \"87a3bd6e-f029-4b35-95a4-fa4c0a14f540\") " pod="kube-system/coredns-674b8bbfcf-rwh7v"
Sep 4 23:46:41.556545 kubelet[3284]: I0904 23:46:41.556528 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87a3bd6e-f029-4b35-95a4-fa4c0a14f540-config-volume\") pod \"coredns-674b8bbfcf-rwh7v\" (UID: \"87a3bd6e-f029-4b35-95a4-fa4c0a14f540\") " pod="kube-system/coredns-674b8bbfcf-rwh7v"
Sep 4 23:46:41.657649 kubelet[3284]: I0904 23:46:41.657599 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a7a0f8a-5404-40b0-9d0f-7fa0af72f4d8-config-volume\") pod \"coredns-674b8bbfcf-45v4x\" (UID: \"2a7a0f8a-5404-40b0-9d0f-7fa0af72f4d8\") " pod="kube-system/coredns-674b8bbfcf-45v4x"
Sep 4 23:46:41.657797 kubelet[3284]: I0904 23:46:41.657660 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz6q4\" (UniqueName: \"kubernetes.io/projected/2a7a0f8a-5404-40b0-9d0f-7fa0af72f4d8-kube-api-access-qz6q4\") pod \"coredns-674b8bbfcf-45v4x\" (UID: \"2a7a0f8a-5404-40b0-9d0f-7fa0af72f4d8\") " pod="kube-system/coredns-674b8bbfcf-45v4x"
Sep 4 23:46:41.777738 containerd[1714]: time="2025-09-04T23:46:41.777328686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rwh7v,Uid:87a3bd6e-f029-4b35-95a4-fa4c0a14f540,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:41.798127 containerd[1714]: time="2025-09-04T23:46:41.798049832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-45v4x,Uid:2a7a0f8a-5404-40b0-9d0f-7fa0af72f4d8,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:43.604587 systemd-networkd[1609]: cilium_host: Link UP
Sep 4 23:46:43.604962 systemd-networkd[1609]: cilium_net: Link UP
Sep 4 23:46:43.604965 systemd-networkd[1609]: cilium_net: Gained carrier
Sep 4 23:46:43.605466 systemd-networkd[1609]: cilium_host: Gained carrier
Sep 4 23:46:43.662311 systemd-networkd[1609]: cilium_net: Gained IPv6LL
Sep 4 23:46:43.806127 systemd-networkd[1609]: cilium_vxlan: Link UP
Sep 4 23:46:43.806134 systemd-networkd[1609]: cilium_vxlan: Gained carrier
Sep 4 23:46:43.831405 systemd-networkd[1609]: cilium_host: Gained IPv6LL
Sep 4 23:46:44.438223 kernel: NET: Registered PF_ALG protocol family
Sep 4 23:46:45.292882 systemd-networkd[1609]: lxc_health: Link UP
Sep 4 23:46:45.310865 systemd-networkd[1609]: lxc_health: Gained carrier
Sep 4 23:46:45.432323 systemd-networkd[1609]: cilium_vxlan: Gained IPv6LL
Sep 4 23:46:45.561378 systemd-networkd[1609]: lxc410587559f47: Link UP
Sep 4 23:46:45.576614 kernel: eth0: renamed from tmp38881
Sep 4 23:46:45.582449 systemd-networkd[1609]: lxc410587559f47: Gained carrier
Sep 4 23:46:45.601642 systemd-networkd[1609]: lxc9ba6b5d33277: Link UP
Sep 4 23:46:45.620236 kernel: eth0: renamed from tmp64798
Sep 4 23:46:45.626110 systemd-networkd[1609]: lxc9ba6b5d33277: Gained carrier
Sep 4 23:46:46.457900 systemd-networkd[1609]: lxc_health: Gained IPv6LL
Sep 4 23:46:47.031322 systemd-networkd[1609]: lxc9ba6b5d33277: Gained IPv6LL
Sep 4 23:46:47.031610 systemd-networkd[1609]: lxc410587559f47: Gained IPv6LL
Sep 4 23:46:47.256212 kubelet[3284]: I0904 23:46:47.255653 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7j95n" podStartSLOduration=16.933316685 podStartE2EDuration="43.255636797s" podCreationTimestamp="2025-09-04 23:46:04 +0000 UTC" firstStartedPulling="2025-09-04 23:46:05.468400911 +0000 UTC m=+7.706572132" lastFinishedPulling="2025-09-04 23:46:31.790721063 +0000 UTC m=+34.028892244" observedRunningTime="2025-09-04 23:46:41.559731911 +0000 UTC m=+43.797903132" watchObservedRunningTime="2025-09-04 23:46:47.255636797 +0000 UTC m=+49.493808018"
Sep 4 23:46:49.502286 containerd[1714]: time="2025-09-04T23:46:49.502126202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:49.502682 containerd[1714]: time="2025-09-04T23:46:49.502256322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:49.502682 containerd[1714]: time="2025-09-04T23:46:49.502270202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:49.502810 containerd[1714]: time="2025-09-04T23:46:49.502770562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:49.538473 systemd[1]: Started cri-containerd-6479896e4d708da9ba78b598ee1dddf95bb56a2399fcaddb35ff8ea69c7f53a1.scope - libcontainer container 6479896e4d708da9ba78b598ee1dddf95bb56a2399fcaddb35ff8ea69c7f53a1.
Sep 4 23:46:49.547980 containerd[1714]: time="2025-09-04T23:46:49.547852175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:49.547980 containerd[1714]: time="2025-09-04T23:46:49.547926975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:49.550127 containerd[1714]: time="2025-09-04T23:46:49.550004934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:49.550301 containerd[1714]: time="2025-09-04T23:46:49.550215253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:49.580393 systemd[1]: Started cri-containerd-388811ea251f4f4686d476f3b672f85716e17e3a043cdb5a49f97645dca40015.scope - libcontainer container 388811ea251f4f4686d476f3b672f85716e17e3a043cdb5a49f97645dca40015.
Sep 4 23:46:49.623658 containerd[1714]: time="2025-09-04T23:46:49.623621009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rwh7v,Uid:87a3bd6e-f029-4b35-95a4-fa4c0a14f540,Namespace:kube-system,Attempt:0,} returns sandbox id \"6479896e4d708da9ba78b598ee1dddf95bb56a2399fcaddb35ff8ea69c7f53a1\""
Sep 4 23:46:49.635442 containerd[1714]: time="2025-09-04T23:46:49.634603363Z" level=info msg="CreateContainer within sandbox \"6479896e4d708da9ba78b598ee1dddf95bb56a2399fcaddb35ff8ea69c7f53a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:49.654504 containerd[1714]: time="2025-09-04T23:46:49.654463231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-45v4x,Uid:2a7a0f8a-5404-40b0-9d0f-7fa0af72f4d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"388811ea251f4f4686d476f3b672f85716e17e3a043cdb5a49f97645dca40015\""
Sep 4 23:46:49.669231 containerd[1714]: time="2025-09-04T23:46:49.667886263Z" level=info msg="CreateContainer within sandbox \"388811ea251f4f4686d476f3b672f85716e17e3a043cdb5a49f97645dca40015\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:49.679221 containerd[1714]: time="2025-09-04T23:46:49.679147896Z" level=info msg="CreateContainer within sandbox \"6479896e4d708da9ba78b598ee1dddf95bb56a2399fcaddb35ff8ea69c7f53a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b08a6f6bf2eedce12a43082230ca54b4c2606a3ef13b0ed54ca39a6a4231568f\""
Sep 4 23:46:49.682005 containerd[1714]: time="2025-09-04T23:46:49.681901854Z" level=info msg="StartContainer for \"b08a6f6bf2eedce12a43082230ca54b4c2606a3ef13b0ed54ca39a6a4231568f\""
Sep 4 23:46:49.713369 systemd[1]: Started cri-containerd-b08a6f6bf2eedce12a43082230ca54b4c2606a3ef13b0ed54ca39a6a4231568f.scope - libcontainer container b08a6f6bf2eedce12a43082230ca54b4c2606a3ef13b0ed54ca39a6a4231568f.
Sep 4 23:46:49.729663 containerd[1714]: time="2025-09-04T23:46:49.729499185Z" level=info msg="CreateContainer within sandbox \"388811ea251f4f4686d476f3b672f85716e17e3a043cdb5a49f97645dca40015\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61c944e2148cd328dfccc30449cab3d91232523bd5e038f5fbaf9c917e05bb55\""
Sep 4 23:46:49.732258 containerd[1714]: time="2025-09-04T23:46:49.731581584Z" level=info msg="StartContainer for \"61c944e2148cd328dfccc30449cab3d91232523bd5e038f5fbaf9c917e05bb55\""
Sep 4 23:46:49.749995 containerd[1714]: time="2025-09-04T23:46:49.749949053Z" level=info msg="StartContainer for \"b08a6f6bf2eedce12a43082230ca54b4c2606a3ef13b0ed54ca39a6a4231568f\" returns successfully"
Sep 4 23:46:49.781712 systemd[1]: Started cri-containerd-61c944e2148cd328dfccc30449cab3d91232523bd5e038f5fbaf9c917e05bb55.scope - libcontainer container 61c944e2148cd328dfccc30449cab3d91232523bd5e038f5fbaf9c917e05bb55.
Sep 4 23:46:49.823584 containerd[1714]: time="2025-09-04T23:46:49.823530929Z" level=info msg="StartContainer for \"61c944e2148cd328dfccc30449cab3d91232523bd5e038f5fbaf9c917e05bb55\" returns successfully"
Sep 4 23:46:50.581130 kubelet[3284]: I0904 23:46:50.580604 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-45v4x" podStartSLOduration=46.580583952 podStartE2EDuration="46.580583952s" podCreationTimestamp="2025-09-04 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:50.579037313 +0000 UTC m=+52.817208574" watchObservedRunningTime="2025-09-04 23:46:50.580583952 +0000 UTC m=+52.818755173"
Sep 4 23:46:50.631341 kubelet[3284]: I0904 23:46:50.631137 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rwh7v" podStartSLOduration=46.631117642 podStartE2EDuration="46.631117642s" podCreationTimestamp="2025-09-04 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:50.603649898 +0000 UTC m=+52.841821159" watchObservedRunningTime="2025-09-04 23:46:50.631117642 +0000 UTC m=+52.869288863"
Sep 4 23:48:06.636639 systemd[1]: Started sshd@7-10.200.20.4:22-10.200.16.10:43130.service - OpenSSH per-connection server daemon (10.200.16.10:43130).
Sep 4 23:48:07.094808 sshd[4688]: Accepted publickey for core from 10.200.16.10 port 43130 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:07.097234 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:07.102131 systemd-logind[1697]: New session 10 of user core.
Sep 4 23:48:07.108378 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 23:48:07.521103 sshd[4690]: Connection closed by 10.200.16.10 port 43130
Sep 4 23:48:07.521668 sshd-session[4688]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:07.525314 systemd[1]: sshd@7-10.200.20.4:22-10.200.16.10:43130.service: Deactivated successfully.
Sep 4 23:48:07.527450 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 23:48:07.528253 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit.
Sep 4 23:48:07.529297 systemd-logind[1697]: Removed session 10.
Sep 4 23:48:12.605497 systemd[1]: Started sshd@8-10.200.20.4:22-10.200.16.10:52442.service - OpenSSH per-connection server daemon (10.200.16.10:52442).
Sep 4 23:48:13.065680 sshd[4703]: Accepted publickey for core from 10.200.16.10 port 52442 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:13.067169 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:13.071217 systemd-logind[1697]: New session 11 of user core.
Sep 4 23:48:13.075357 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 23:48:13.479576 sshd[4705]: Connection closed by 10.200.16.10 port 52442
Sep 4 23:48:13.480158 sshd-session[4703]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:13.483770 systemd[1]: sshd@8-10.200.20.4:22-10.200.16.10:52442.service: Deactivated successfully.
Sep 4 23:48:13.486028 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 23:48:13.486921 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit.
Sep 4 23:48:13.487815 systemd-logind[1697]: Removed session 11.
Sep 4 23:48:18.570447 systemd[1]: Started sshd@9-10.200.20.4:22-10.200.16.10:52454.service - OpenSSH per-connection server daemon (10.200.16.10:52454).
Sep 4 23:48:19.026159 sshd[4718]: Accepted publickey for core from 10.200.16.10 port 52454 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:19.027558 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:19.032615 systemd-logind[1697]: New session 12 of user core.
Sep 4 23:48:19.040375 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 23:48:19.436459 sshd[4720]: Connection closed by 10.200.16.10 port 52454
Sep 4 23:48:19.436988 sshd-session[4718]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:19.440385 systemd[1]: sshd@9-10.200.20.4:22-10.200.16.10:52454.service: Deactivated successfully.
Sep 4 23:48:19.442823 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 23:48:19.443731 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit.
Sep 4 23:48:19.444883 systemd-logind[1697]: Removed session 12.
Sep 4 23:48:24.533246 systemd[1]: Started sshd@10-10.200.20.4:22-10.200.16.10:37736.service - OpenSSH per-connection server daemon (10.200.16.10:37736).
Sep 4 23:48:24.998233 sshd[4733]: Accepted publickey for core from 10.200.16.10 port 37736 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:24.999684 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:25.004503 systemd-logind[1697]: New session 13 of user core.
Sep 4 23:48:25.014355 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 23:48:25.419322 sshd[4735]: Connection closed by 10.200.16.10 port 37736
Sep 4 23:48:25.420121 sshd-session[4733]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:25.423932 systemd[1]: sshd@10-10.200.20.4:22-10.200.16.10:37736.service: Deactivated successfully.
Sep 4 23:48:25.426246 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 23:48:25.427577 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit.
Sep 4 23:48:25.428573 systemd-logind[1697]: Removed session 13.
Sep 4 23:48:25.502226 systemd[1]: Started sshd@11-10.200.20.4:22-10.200.16.10:37750.service - OpenSSH per-connection server daemon (10.200.16.10:37750).
Sep 4 23:48:25.966949 sshd[4748]: Accepted publickey for core from 10.200.16.10 port 37750 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:25.968345 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:25.972511 systemd-logind[1697]: New session 14 of user core.
Sep 4 23:48:25.980363 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 23:48:26.425094 sshd[4750]: Connection closed by 10.200.16.10 port 37750
Sep 4 23:48:26.424073 sshd-session[4748]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:26.428272 systemd[1]: sshd@11-10.200.20.4:22-10.200.16.10:37750.service: Deactivated successfully.
Sep 4 23:48:26.430160 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 23:48:26.431087 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit.
Sep 4 23:48:26.432323 systemd-logind[1697]: Removed session 14.
Sep 4 23:48:26.508489 systemd[1]: Started sshd@12-10.200.20.4:22-10.200.16.10:37762.service - OpenSSH per-connection server daemon (10.200.16.10:37762).
Sep 4 23:48:26.969934 sshd[4759]: Accepted publickey for core from 10.200.16.10 port 37762 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:26.971322 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:26.975964 systemd-logind[1697]: New session 15 of user core.
Sep 4 23:48:26.987368 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 23:48:27.381370 sshd[4761]: Connection closed by 10.200.16.10 port 37762
Sep 4 23:48:27.383719 sshd-session[4759]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:27.388311 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit.
Sep 4 23:48:27.388756 systemd[1]: sshd@12-10.200.20.4:22-10.200.16.10:37762.service: Deactivated successfully.
Sep 4 23:48:27.392899 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 23:48:27.395994 systemd-logind[1697]: Removed session 15.
Sep 4 23:48:32.469491 systemd[1]: Started sshd@13-10.200.20.4:22-10.200.16.10:55742.service - OpenSSH per-connection server daemon (10.200.16.10:55742).
Sep 4 23:48:32.925521 sshd[4772]: Accepted publickey for core from 10.200.16.10 port 55742 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:32.926704 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:32.930989 systemd-logind[1697]: New session 16 of user core.
Sep 4 23:48:32.943376 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 23:48:33.331892 sshd[4774]: Connection closed by 10.200.16.10 port 55742
Sep 4 23:48:33.330991 sshd-session[4772]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:33.334827 systemd[1]: sshd@13-10.200.20.4:22-10.200.16.10:55742.service: Deactivated successfully.
Sep 4 23:48:33.337163 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 23:48:33.338282 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit.
Sep 4 23:48:33.339565 systemd-logind[1697]: Removed session 16.
Sep 4 23:48:36.571805 update_engine[1699]: I20250904 23:48:36.571746 1699 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 4 23:48:36.571805 update_engine[1699]: I20250904 23:48:36.571799 1699 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 4 23:48:36.572228 update_engine[1699]: I20250904 23:48:36.571965 1699 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 4 23:48:36.572433 update_engine[1699]: I20250904 23:48:36.572402 1699 omaha_request_params.cc:62] Current group set to stable
Sep 4 23:48:36.572767 update_engine[1699]: I20250904 23:48:36.572503 1699 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 4 23:48:36.572767 update_engine[1699]: I20250904 23:48:36.572517 1699 update_attempter.cc:643] Scheduling an action processor start.
Sep 4 23:48:36.572767 update_engine[1699]: I20250904 23:48:36.572574 1699 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 4 23:48:36.572767 update_engine[1699]: I20250904 23:48:36.572609 1699 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 4 23:48:36.572767 update_engine[1699]: I20250904 23:48:36.572674 1699 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 4 23:48:36.572767 update_engine[1699]: I20250904 23:48:36.572681 1699 omaha_request_action.cc:272] Request:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]:
Sep 4 23:48:36.572767 update_engine[1699]: I20250904 23:48:36.572688 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 23:48:36.573703 locksmithd[1803]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 4 23:48:36.574036 update_engine[1699]: I20250904 23:48:36.574003 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 23:48:36.574431 update_engine[1699]: I20250904 23:48:36.574393 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 23:48:36.649381 update_engine[1699]: E20250904 23:48:36.649319 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 23:48:36.649567 update_engine[1699]: I20250904 23:48:36.649432 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 4 23:48:38.414996 systemd[1]: Started sshd@14-10.200.20.4:22-10.200.16.10:55750.service - OpenSSH per-connection server daemon (10.200.16.10:55750).
Sep 4 23:48:38.875843 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 55750 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:38.877296 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:38.881417 systemd-logind[1697]: New session 17 of user core.
Sep 4 23:48:38.886386 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 23:48:39.290159 sshd[4789]: Connection closed by 10.200.16.10 port 55750
Sep 4 23:48:39.290732 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:39.294700 systemd[1]: sshd@14-10.200.20.4:22-10.200.16.10:55750.service: Deactivated successfully.
Sep 4 23:48:39.299437 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 23:48:39.300567 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit.
Sep 4 23:48:39.303043 systemd-logind[1697]: Removed session 17.
Sep 4 23:48:39.377654 systemd[1]: Started sshd@15-10.200.20.4:22-10.200.16.10:55754.service - OpenSSH per-connection server daemon (10.200.16.10:55754).
Sep 4 23:48:39.833929 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 55754 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:39.835603 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:39.841314 systemd-logind[1697]: New session 18 of user core.
Sep 4 23:48:39.847425 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 23:48:40.279415 sshd[4803]: Connection closed by 10.200.16.10 port 55754
Sep 4 23:48:40.280077 sshd-session[4801]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:40.284752 systemd[1]: sshd@15-10.200.20.4:22-10.200.16.10:55754.service: Deactivated successfully.
Sep 4 23:48:40.287120 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 23:48:40.288266 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit.
Sep 4 23:48:40.289577 systemd-logind[1697]: Removed session 18.
Sep 4 23:48:40.370723 systemd[1]: Started sshd@16-10.200.20.4:22-10.200.16.10:59568.service - OpenSSH per-connection server daemon (10.200.16.10:59568).
Sep 4 23:48:40.827658 sshd[4813]: Accepted publickey for core from 10.200.16.10 port 59568 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:40.829084 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:40.834759 systemd-logind[1697]: New session 19 of user core.
Sep 4 23:48:40.841395 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 23:48:41.779001 sshd[4815]: Connection closed by 10.200.16.10 port 59568
Sep 4 23:48:41.779454 sshd-session[4813]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:41.782957 systemd[1]: sshd@16-10.200.20.4:22-10.200.16.10:59568.service: Deactivated successfully.
Sep 4 23:48:41.787853 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 23:48:41.791403 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit.
Sep 4 23:48:41.793813 systemd-logind[1697]: Removed session 19.
Sep 4 23:48:41.872479 systemd[1]: Started sshd@17-10.200.20.4:22-10.200.16.10:59572.service - OpenSSH per-connection server daemon (10.200.16.10:59572).
Sep 4 23:48:42.369536 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 59572 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:42.371587 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:42.376361 systemd-logind[1697]: New session 20 of user core.
Sep 4 23:48:42.386386 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 23:48:42.911287 sshd[4834]: Connection closed by 10.200.16.10 port 59572
Sep 4 23:48:42.911161 sshd-session[4832]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:42.914321 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit.
Sep 4 23:48:42.914484 systemd[1]: sshd@17-10.200.20.4:22-10.200.16.10:59572.service: Deactivated successfully.
Sep 4 23:48:42.917004 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 23:48:42.919534 systemd-logind[1697]: Removed session 20.
Sep 4 23:48:43.005624 systemd[1]: Started sshd@18-10.200.20.4:22-10.200.16.10:59578.service - OpenSSH per-connection server daemon (10.200.16.10:59578).
Sep 4 23:48:43.504986 sshd[4844]: Accepted publickey for core from 10.200.16.10 port 59578 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:43.506373 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:43.512273 systemd-logind[1697]: New session 21 of user core.
Sep 4 23:48:43.520382 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 23:48:43.938407 sshd[4846]: Connection closed by 10.200.16.10 port 59578
Sep 4 23:48:43.941323 sshd-session[4844]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:43.944772 systemd[1]: sshd@18-10.200.20.4:22-10.200.16.10:59578.service: Deactivated successfully.
Sep 4 23:48:43.947241 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 23:48:43.948096 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit.
Sep 4 23:48:43.949114 systemd-logind[1697]: Removed session 21.
Sep 4 23:48:46.571232 update_engine[1699]: I20250904 23:48:46.570942 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 23:48:46.571574 update_engine[1699]: I20250904 23:48:46.571270 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 23:48:46.571574 update_engine[1699]: I20250904 23:48:46.571532 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 23:48:46.655738 update_engine[1699]: E20250904 23:48:46.655682 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 23:48:46.655825 update_engine[1699]: I20250904 23:48:46.655771 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 4 23:48:49.021803 systemd[1]: Started sshd@19-10.200.20.4:22-10.200.16.10:59582.service - OpenSSH per-connection server daemon (10.200.16.10:59582).
Sep 4 23:48:49.483216 sshd[4859]: Accepted publickey for core from 10.200.16.10 port 59582 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:49.484175 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:49.488997 systemd-logind[1697]: New session 22 of user core.
Sep 4 23:48:49.497375 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 23:48:49.887617 sshd[4861]: Connection closed by 10.200.16.10 port 59582
Sep 4 23:48:49.888228 sshd-session[4859]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:49.892151 systemd[1]: sshd@19-10.200.20.4:22-10.200.16.10:59582.service: Deactivated successfully.
Sep 4 23:48:49.894498 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 23:48:49.895347 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit.
Sep 4 23:48:49.897629 systemd-logind[1697]: Removed session 22.
Sep 4 23:48:54.972531 systemd[1]: Started sshd@20-10.200.20.4:22-10.200.16.10:50070.service - OpenSSH per-connection server daemon (10.200.16.10:50070).
Sep 4 23:48:55.439422 sshd[4872]: Accepted publickey for core from 10.200.16.10 port 50070 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:55.440687 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:55.445292 systemd-logind[1697]: New session 23 of user core.
Sep 4 23:48:55.452457 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 23:48:55.849279 sshd[4874]: Connection closed by 10.200.16.10 port 50070
Sep 4 23:48:55.850720 sshd-session[4872]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:55.856293 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit.
Sep 4 23:48:55.857093 systemd[1]: sshd@20-10.200.20.4:22-10.200.16.10:50070.service: Deactivated successfully.
Sep 4 23:48:55.860888 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 23:48:55.862749 systemd-logind[1697]: Removed session 23.
Sep 4 23:48:55.938795 systemd[1]: Started sshd@21-10.200.20.4:22-10.200.16.10:50084.service - OpenSSH per-connection server daemon (10.200.16.10:50084).
Sep 4 23:48:56.438334 sshd[4885]: Accepted publickey for core from 10.200.16.10 port 50084 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U
Sep 4 23:48:56.439663 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:56.445316 systemd-logind[1697]: New session 24 of user core.
Sep 4 23:48:56.450404 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 23:48:56.568504 update_engine[1699]: I20250904 23:48:56.568434 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 23:48:56.568848 update_engine[1699]: I20250904 23:48:56.568660 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 23:48:56.568949 update_engine[1699]: I20250904 23:48:56.568917 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 23:48:56.675974 update_engine[1699]: E20250904 23:48:56.675915 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 23:48:56.676107 update_engine[1699]: I20250904 23:48:56.676009 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 4 23:48:58.827019 systemd[1]: run-containerd-runc-k8s.io-7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774-runc.yWD3mG.mount: Deactivated successfully.
Sep 4 23:48:58.833788 containerd[1714]: time="2025-09-04T23:48:58.833573788Z" level=info msg="StopContainer for \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\" with timeout 30 (s)"
Sep 4 23:48:58.836519 containerd[1714]: time="2025-09-04T23:48:58.834463507Z" level=info msg="Stop container \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\" with signal terminated"
Sep 4 23:48:58.850245 containerd[1714]: time="2025-09-04T23:48:58.850162782Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:48:58.856666 systemd[1]: cri-containerd-55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3.scope: Deactivated successfully.
Sep 4 23:48:58.863624 containerd[1714]: time="2025-09-04T23:48:58.863579617Z" level=info msg="StopContainer for \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\" with timeout 2 (s)"
Sep 4 23:48:58.863997 containerd[1714]: time="2025-09-04T23:48:58.863977097Z" level=info msg="Stop container \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\" with signal terminated"
Sep 4 23:48:58.874247 systemd-networkd[1609]: lxc_health: Link DOWN
Sep 4 23:48:58.874254 systemd-networkd[1609]: lxc_health: Lost carrier
Sep 4 23:48:58.891785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3-rootfs.mount: Deactivated successfully.
Sep 4 23:48:58.892942 systemd[1]: cri-containerd-7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774.scope: Deactivated successfully.
Sep 4 23:48:58.895560 systemd[1]: cri-containerd-7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774.scope: Consumed 6.756s CPU time, 124.8M memory peak, 128K read from disk, 12.9M written to disk.
Sep 4 23:48:58.916839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774-rootfs.mount: Deactivated successfully.
Sep 4 23:48:58.941741 containerd[1714]: time="2025-09-04T23:48:58.941659909Z" level=info msg="shim disconnected" id=7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774 namespace=k8s.io
Sep 4 23:48:58.942441 containerd[1714]: time="2025-09-04T23:48:58.942128149Z" level=warning msg="cleaning up after shim disconnected" id=7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774 namespace=k8s.io
Sep 4 23:48:58.942441 containerd[1714]: time="2025-09-04T23:48:58.942149549Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:58.942721 containerd[1714]: time="2025-09-04T23:48:58.942676068Z" level=info msg="shim disconnected" id=55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3 namespace=k8s.io
Sep 4 23:48:58.942946 containerd[1714]: time="2025-09-04T23:48:58.942843548Z" level=warning msg="cleaning up after shim disconnected" id=55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3 namespace=k8s.io
Sep 4 23:48:58.942946 containerd[1714]: time="2025-09-04T23:48:58.942880148Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:58.968759 containerd[1714]: time="2025-09-04T23:48:58.968609499Z" level=info msg="StopContainer for \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\" returns successfully"
Sep 4 23:48:58.971245 containerd[1714]: time="2025-09-04T23:48:58.969299419Z" level=info msg="StopPodSandbox for \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\""
Sep 4 23:48:58.971245 containerd[1714]: time="2025-09-04T23:48:58.969338699Z" level=info msg="Container to stop \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:58.971725 containerd[1714]: time="2025-09-04T23:48:58.971617858Z" level=info msg="StopContainer for \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\" returns successfully"
Sep 4 23:48:58.971839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3-shm.mount: Deactivated successfully.
Sep 4 23:48:58.973681 containerd[1714]: time="2025-09-04T23:48:58.973270297Z" level=info msg="StopPodSandbox for \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\""
Sep 4 23:48:58.973681 containerd[1714]: time="2025-09-04T23:48:58.973318897Z" level=info msg="Container to stop \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:58.973681 containerd[1714]: time="2025-09-04T23:48:58.973330897Z" level=info msg="Container to stop \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:58.973681 containerd[1714]: time="2025-09-04T23:48:58.973340577Z" level=info msg="Container to stop \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:58.973681 containerd[1714]: time="2025-09-04T23:48:58.973349337Z" level=info msg="Container to stop \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:58.973681 containerd[1714]: time="2025-09-04T23:48:58.973358377Z" level=info msg="Container to stop \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:58.979592 systemd[1]: cri-containerd-e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464.scope: Deactivated successfully.
Sep 4 23:48:58.983857 systemd[1]: cri-containerd-5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3.scope: Deactivated successfully.
Sep 4 23:48:59.026321 containerd[1714]: time="2025-09-04T23:48:59.025138599Z" level=info msg="shim disconnected" id=e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464 namespace=k8s.io
Sep 4 23:48:59.026321 containerd[1714]: time="2025-09-04T23:48:59.026068158Z" level=warning msg="cleaning up after shim disconnected" id=e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464 namespace=k8s.io
Sep 4 23:48:59.026321 containerd[1714]: time="2025-09-04T23:48:59.026079838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:59.026321 containerd[1714]: time="2025-09-04T23:48:59.026116718Z" level=info msg="shim disconnected" id=5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3 namespace=k8s.io
Sep 4 23:48:59.026321 containerd[1714]: time="2025-09-04T23:48:59.026166758Z" level=warning msg="cleaning up after shim disconnected" id=5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3 namespace=k8s.io
Sep 4 23:48:59.026321 containerd[1714]: time="2025-09-04T23:48:59.026175878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:59.043117 containerd[1714]: time="2025-09-04T23:48:59.042849072Z" level=info msg="TearDown network for sandbox \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" successfully"
Sep 4 23:48:59.043117 containerd[1714]: time="2025-09-04T23:48:59.042887352Z" level=info msg="StopPodSandbox for \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" returns successfully"
Sep 4 23:48:59.043117 containerd[1714]: time="2025-09-04T23:48:59.043034472Z" level=info msg="TearDown network for sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" successfully"
Sep 4 23:48:59.043117 containerd[1714]: time="2025-09-04T23:48:59.043055232Z" level=info msg="StopPodSandbox for \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" returns successfully"
Sep 4 23:48:59.143313 kubelet[3284]: I0904 23:48:59.142397 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a71ac55-f645-43f5-871a-0903af937eb0-cilium-config-path\") pod \"6a71ac55-f645-43f5-871a-0903af937eb0\" (UID: \"6a71ac55-f645-43f5-871a-0903af937eb0\") "
Sep 4 23:48:59.143313 kubelet[3284]: I0904 23:48:59.142445 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-xtables-lock\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143313 kubelet[3284]: I0904 23:48:59.142464 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-kernel\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143313 kubelet[3284]: I0904 23:48:59.142484 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdbpc\" (UniqueName: \"kubernetes.io/projected/6a71ac55-f645-43f5-871a-0903af937eb0-kube-api-access-qdbpc\") pod \"6a71ac55-f645-43f5-871a-0903af937eb0\" (UID: \"6a71ac55-f645-43f5-871a-0903af937eb0\") "
Sep 4 23:48:59.143313 kubelet[3284]: I0904 23:48:59.142497 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hostproc\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143313 kubelet[3284]: I0904 23:48:59.142514 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsp5h\" (UniqueName: \"kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-kube-api-access-fsp5h\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143799 kubelet[3284]: I0904 23:48:59.142539 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-config-path\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143799 kubelet[3284]: I0904 23:48:59.142555 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-bpf-maps\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143799 kubelet[3284]: I0904 23:48:59.142571 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-clustermesh-secrets\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143799 kubelet[3284]: I0904 23:48:59.142584 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cni-path\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143799 kubelet[3284]: I0904 23:48:59.142601 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-net\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143799 kubelet[3284]: I0904 23:48:59.142615 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-cgroup\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143928 kubelet[3284]: I0904 23:48:59.142633 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-lib-modules\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143928 kubelet[3284]: I0904 23:48:59.142680 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hubble-tls\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143928 kubelet[3284]: I0904 23:48:59.142702 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-run\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143928 kubelet[3284]: I0904 23:48:59.142716 3284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-etc-cni-netd\") pod \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\" (UID: \"9bd97563-dcf2-4b9c-bef3-cdce5f215b9f\") "
Sep 4 23:48:59.143928 kubelet[3284]: I0904 23:48:59.142816 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.143928 kubelet[3284]: I0904 23:48:59.143267 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.144049 kubelet[3284]: I0904 23:48:59.143307 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.144049 kubelet[3284]: I0904 23:48:59.143323 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.145913 kubelet[3284]: I0904 23:48:59.145683 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hostproc" (OuterVolumeSpecName: "hostproc") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.148700 kubelet[3284]: I0904 23:48:59.148654 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cni-path" (OuterVolumeSpecName: "cni-path") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.149134 kubelet[3284]: I0904 23:48:59.148848 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.149134 kubelet[3284]: I0904 23:48:59.148871 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.149134 kubelet[3284]: I0904 23:48:59.148885 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.149733 kubelet[3284]: I0904 23:48:59.149676 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:48:59.150279 kubelet[3284]: I0904 23:48:59.150024 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:59.150279 kubelet[3284]: I0904 23:48:59.150135 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a71ac55-f645-43f5-871a-0903af937eb0-kube-api-access-qdbpc" (OuterVolumeSpecName: "kube-api-access-qdbpc") pod "6a71ac55-f645-43f5-871a-0903af937eb0" (UID: "6a71ac55-f645-43f5-871a-0903af937eb0"). InnerVolumeSpecName "kube-api-access-qdbpc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:48:59.150486 kubelet[3284]: I0904 23:48:59.150463 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:48:59.151747 kubelet[3284]: I0904 23:48:59.151630 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a71ac55-f645-43f5-871a-0903af937eb0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a71ac55-f645-43f5-871a-0903af937eb0" (UID: "6a71ac55-f645-43f5-871a-0903af937eb0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:48:59.151747 kubelet[3284]: I0904 23:48:59.151707 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-kube-api-access-fsp5h" (OuterVolumeSpecName: "kube-api-access-fsp5h") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "kube-api-access-fsp5h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:48:59.153783 kubelet[3284]: I0904 23:48:59.153750 3284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" (UID: "9bd97563-dcf2-4b9c-bef3-cdce5f215b9f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:48:59.243286 kubelet[3284]: I0904 23:48:59.243249 3284 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-kernel\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243476 kubelet[3284]: I0904 23:48:59.243464 3284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdbpc\" (UniqueName: \"kubernetes.io/projected/6a71ac55-f645-43f5-871a-0903af937eb0-kube-api-access-qdbpc\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243616 kubelet[3284]: I0904 23:48:59.243561 3284 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hostproc\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243616 kubelet[3284]: I0904 23:48:59.243577 3284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fsp5h\" (UniqueName: \"kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-kube-api-access-fsp5h\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243616 kubelet[3284]: I0904 23:48:59.243588 3284 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-config-path\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243616 kubelet[3284]: I0904 23:48:59.243597 3284 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-bpf-maps\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243845 kubelet[3284]: I0904 23:48:59.243606 3284 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-clustermesh-secrets\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243845 kubelet[3284]: I0904 23:48:59.243760 3284 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cni-path\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243845 kubelet[3284]: I0904 23:48:59.243771 3284 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-host-proc-sys-net\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.243845 kubelet[3284]: I0904 23:48:59.243784 3284 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-cgroup\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.244052 kubelet[3284]: I0904 23:48:59.243794 3284 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-lib-modules\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.244052 kubelet[3284]: I0904 23:48:59.243917 3284 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-hubble-tls\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.244052 kubelet[3284]: I0904 23:48:59.243926 3284 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-cilium-run\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.244052 kubelet[3284]: I0904 23:48:59.243934 3284 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-etc-cni-netd\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.244052 kubelet[3284]: I0904 23:48:59.243941 3284 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a71ac55-f645-43f5-871a-0903af937eb0-cilium-config-path\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.244052 kubelet[3284]: I0904 23:48:59.243950 3284 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f-xtables-lock\") on node \"ci-4230.2.2-n-a8c1fd94a3\" DevicePath \"\"" Sep 4 23:48:59.417664 systemd[1]: Removed slice kubepods-besteffort-pod6a71ac55_f645_43f5_871a_0903af937eb0.slice - libcontainer container kubepods-besteffort-pod6a71ac55_f645_43f5_871a_0903af937eb0.slice. Sep 4 23:48:59.420398 systemd[1]: Removed slice kubepods-burstable-pod9bd97563_dcf2_4b9c_bef3_cdce5f215b9f.slice - libcontainer container kubepods-burstable-pod9bd97563_dcf2_4b9c_bef3_cdce5f215b9f.slice. Sep 4 23:48:59.420503 systemd[1]: kubepods-burstable-pod9bd97563_dcf2_4b9c_bef3_cdce5f215b9f.slice: Consumed 6.830s CPU time, 125.2M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 4 23:48:59.800291 kubelet[3284]: I0904 23:48:59.800035 3284 scope.go:117] "RemoveContainer" containerID="55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3" Sep 4 23:48:59.804806 containerd[1714]: time="2025-09-04T23:48:59.804372597Z" level=info msg="RemoveContainer for \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\"" Sep 4 23:48:59.825124 containerd[1714]: time="2025-09-04T23:48:59.823323470Z" level=info msg="RemoveContainer for \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\" returns successfully" Sep 4 23:48:59.823419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464-rootfs.mount: Deactivated successfully. Sep 4 23:48:59.823566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464-shm.mount: Deactivated successfully. Sep 4 23:48:59.823653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3-rootfs.mount: Deactivated successfully. Sep 4 23:48:59.823736 systemd[1]: var-lib-kubelet-pods-6a71ac55\x2df645\x2d43f5\x2d871a\x2d0903af937eb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqdbpc.mount: Deactivated successfully. Sep 4 23:48:59.823820 systemd[1]: var-lib-kubelet-pods-9bd97563\x2ddcf2\x2d4b9c\x2dbef3\x2dcdce5f215b9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfsp5h.mount: Deactivated successfully. Sep 4 23:48:59.823891 systemd[1]: var-lib-kubelet-pods-9bd97563\x2ddcf2\x2d4b9c\x2dbef3\x2dcdce5f215b9f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:48:59.823959 systemd[1]: var-lib-kubelet-pods-9bd97563\x2ddcf2\x2d4b9c\x2dbef3\x2dcdce5f215b9f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 4 23:48:59.826878 kubelet[3284]: I0904 23:48:59.825767 3284 scope.go:117] "RemoveContainer" containerID="55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3" Sep 4 23:48:59.827261 containerd[1714]: time="2025-09-04T23:48:59.826401709Z" level=error msg="ContainerStatus for \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\": not found" Sep 4 23:48:59.828162 kubelet[3284]: E0904 23:48:59.827370 3284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\": not found" containerID="55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3" Sep 4 23:48:59.828162 kubelet[3284]: I0904 23:48:59.827423 3284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3"} err="failed to get container status \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\": rpc error: code = NotFound desc = an error occurred when try to find container \"55a330d6690f79fa5d40f336ae7cd002f3276918a994311b11028dcb49aebdf3\": not found" Sep 4 23:48:59.828162 kubelet[3284]: I0904 23:48:59.827468 3284 scope.go:117] "RemoveContainer" containerID="7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774" Sep 4 23:48:59.829723 containerd[1714]: time="2025-09-04T23:48:59.828945068Z" level=info msg="RemoveContainer for \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\"" Sep 4 23:48:59.839470 containerd[1714]: time="2025-09-04T23:48:59.839425785Z" level=info msg="RemoveContainer for \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\" returns successfully" Sep 4 23:48:59.839861 kubelet[3284]: 
I0904 23:48:59.839661 3284 scope.go:117] "RemoveContainer" containerID="48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb" Sep 4 23:48:59.840836 containerd[1714]: time="2025-09-04T23:48:59.840802464Z" level=info msg="RemoveContainer for \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\"" Sep 4 23:48:59.850787 containerd[1714]: time="2025-09-04T23:48:59.850732740Z" level=info msg="RemoveContainer for \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\" returns successfully" Sep 4 23:48:59.851209 kubelet[3284]: I0904 23:48:59.851066 3284 scope.go:117] "RemoveContainer" containerID="79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94" Sep 4 23:48:59.852501 containerd[1714]: time="2025-09-04T23:48:59.852469260Z" level=info msg="RemoveContainer for \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\"" Sep 4 23:48:59.861583 containerd[1714]: time="2025-09-04T23:48:59.861529617Z" level=info msg="RemoveContainer for \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\" returns successfully" Sep 4 23:48:59.861982 kubelet[3284]: I0904 23:48:59.861817 3284 scope.go:117] "RemoveContainer" containerID="923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd" Sep 4 23:48:59.863180 containerd[1714]: time="2025-09-04T23:48:59.863117736Z" level=info msg="RemoveContainer for \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\"" Sep 4 23:48:59.871696 containerd[1714]: time="2025-09-04T23:48:59.871639453Z" level=info msg="RemoveContainer for \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\" returns successfully" Sep 4 23:48:59.871959 kubelet[3284]: I0904 23:48:59.871930 3284 scope.go:117] "RemoveContainer" containerID="2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c" Sep 4 23:48:59.873556 containerd[1714]: time="2025-09-04T23:48:59.873501532Z" level=info msg="RemoveContainer for 
\"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\"" Sep 4 23:48:59.882461 containerd[1714]: time="2025-09-04T23:48:59.882419329Z" level=info msg="RemoveContainer for \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\" returns successfully" Sep 4 23:48:59.883139 kubelet[3284]: I0904 23:48:59.882822 3284 scope.go:117] "RemoveContainer" containerID="7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774" Sep 4 23:48:59.883247 containerd[1714]: time="2025-09-04T23:48:59.883076889Z" level=error msg="ContainerStatus for \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\": not found" Sep 4 23:48:59.883503 kubelet[3284]: E0904 23:48:59.883382 3284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\": not found" containerID="7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774" Sep 4 23:48:59.883503 kubelet[3284]: I0904 23:48:59.883413 3284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774"} err="failed to get container status \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b0fd58a7396f35505bbe9c3bde94c769d75cf8e15ed077b56bccbec90b08774\": not found" Sep 4 23:48:59.883503 kubelet[3284]: I0904 23:48:59.883434 3284 scope.go:117] "RemoveContainer" containerID="48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb" Sep 4 23:48:59.883768 containerd[1714]: time="2025-09-04T23:48:59.883729169Z" level=error msg="ContainerStatus for 
\"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\": not found" Sep 4 23:48:59.883898 kubelet[3284]: E0904 23:48:59.883873 3284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\": not found" containerID="48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb" Sep 4 23:48:59.883931 kubelet[3284]: I0904 23:48:59.883918 3284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb"} err="failed to get container status \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"48f498a8f7ea50b3b74a8e248cd166358b7a6f3870c82fb2fddceea8bb35abdb\": not found" Sep 4 23:48:59.883955 kubelet[3284]: I0904 23:48:59.883933 3284 scope.go:117] "RemoveContainer" containerID="79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94" Sep 4 23:48:59.884225 containerd[1714]: time="2025-09-04T23:48:59.884195408Z" level=error msg="ContainerStatus for \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\": not found" Sep 4 23:48:59.884379 kubelet[3284]: E0904 23:48:59.884352 3284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\": not found" 
containerID="79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94" Sep 4 23:48:59.884415 kubelet[3284]: I0904 23:48:59.884384 3284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94"} err="failed to get container status \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\": rpc error: code = NotFound desc = an error occurred when try to find container \"79c81ded93a1e06817f88611038a22b63919ce8fd0269e145ba0d61e9834fa94\": not found" Sep 4 23:48:59.884415 kubelet[3284]: I0904 23:48:59.884405 3284 scope.go:117] "RemoveContainer" containerID="923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd" Sep 4 23:48:59.884594 containerd[1714]: time="2025-09-04T23:48:59.884564448Z" level=error msg="ContainerStatus for \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\": not found" Sep 4 23:48:59.884769 kubelet[3284]: E0904 23:48:59.884689 3284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\": not found" containerID="923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd" Sep 4 23:48:59.884803 kubelet[3284]: I0904 23:48:59.884773 3284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd"} err="failed to get container status \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"923334bf46c70c8fcfef56ab1e8ae5016c37f78aae850852403ea568231101dd\": not found" Sep 4 23:48:59.884803 
kubelet[3284]: I0904 23:48:59.884788 3284 scope.go:117] "RemoveContainer" containerID="2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c" Sep 4 23:48:59.885119 containerd[1714]: time="2025-09-04T23:48:59.885091968Z" level=error msg="ContainerStatus for \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\": not found" Sep 4 23:48:59.885246 kubelet[3284]: E0904 23:48:59.885224 3284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\": not found" containerID="2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c" Sep 4 23:48:59.885276 kubelet[3284]: I0904 23:48:59.885252 3284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c"} err="failed to get container status \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2301495732ae72052663a94747424af43ac35ad746315edf4889e8cf7240355c\": not found" Sep 4 23:49:00.822778 sshd[4887]: Connection closed by 10.200.16.10 port 50084 Sep 4 23:49:00.823533 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:00.826936 systemd[1]: sshd@21-10.200.20.4:22-10.200.16.10:50084.service: Deactivated successfully. Sep 4 23:49:00.826937 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:49:00.830750 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:49:00.831057 systemd[1]: session-24.scope: Consumed 1.432s CPU time, 23.5M memory peak. Sep 4 23:49:00.832961 systemd-logind[1697]: Removed session 24. 
Sep 4 23:49:00.921445 systemd[1]: Started sshd@22-10.200.20.4:22-10.200.16.10:59258.service - OpenSSH per-connection server daemon (10.200.16.10:59258). Sep 4 23:49:01.411774 kubelet[3284]: I0904 23:49:01.410929 3284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a71ac55-f645-43f5-871a-0903af937eb0" path="/var/lib/kubelet/pods/6a71ac55-f645-43f5-871a-0903af937eb0/volumes" Sep 4 23:49:01.411774 kubelet[3284]: I0904 23:49:01.411342 3284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd97563-dcf2-4b9c-bef3-cdce5f215b9f" path="/var/lib/kubelet/pods/9bd97563-dcf2-4b9c-bef3-cdce5f215b9f/volumes" Sep 4 23:49:01.414619 sshd[5047]: Accepted publickey for core from 10.200.16.10 port 59258 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:49:01.416324 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:01.422327 systemd-logind[1697]: New session 25 of user core. Sep 4 23:49:01.427391 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 23:49:01.430812 containerd[1714]: time="2025-09-04T23:49:01.430545968Z" level=info msg="StopPodSandbox for \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\"" Sep 4 23:49:01.430812 containerd[1714]: time="2025-09-04T23:49:01.430633208Z" level=info msg="TearDown network for sandbox \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" successfully" Sep 4 23:49:01.430812 containerd[1714]: time="2025-09-04T23:49:01.430642768Z" level=info msg="StopPodSandbox for \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" returns successfully" Sep 4 23:49:01.432370 containerd[1714]: time="2025-09-04T23:49:01.431866288Z" level=info msg="RemovePodSandbox for \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\"" Sep 4 23:49:01.432370 containerd[1714]: time="2025-09-04T23:49:01.431913848Z" level=info msg="Forcibly stopping sandbox \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\"" Sep 4 23:49:01.432370 containerd[1714]: time="2025-09-04T23:49:01.431982088Z" level=info msg="TearDown network for sandbox \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" successfully" Sep 4 23:49:01.440475 containerd[1714]: time="2025-09-04T23:49:01.440384724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:49:01.440475 containerd[1714]: time="2025-09-04T23:49:01.440476084Z" level=info msg="RemovePodSandbox \"5c8c0c3804de8bff1775e8497bc5a61916431e795b2f4ea6ddf8277b48c458a3\" returns successfully" Sep 4 23:49:01.441101 containerd[1714]: time="2025-09-04T23:49:01.441060884Z" level=info msg="StopPodSandbox for \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\"" Sep 4 23:49:01.441159 containerd[1714]: time="2025-09-04T23:49:01.441139484Z" level=info msg="TearDown network for sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" successfully" Sep 4 23:49:01.441159 containerd[1714]: time="2025-09-04T23:49:01.441149684Z" level=info msg="StopPodSandbox for \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" returns successfully" Sep 4 23:49:01.441756 containerd[1714]: time="2025-09-04T23:49:01.441730084Z" level=info msg="RemovePodSandbox for \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\"" Sep 4 23:49:01.441756 containerd[1714]: time="2025-09-04T23:49:01.441757484Z" level=info msg="Forcibly stopping sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\"" Sep 4 23:49:01.441843 containerd[1714]: time="2025-09-04T23:49:01.441801484Z" level=info msg="TearDown network for sandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" successfully" Sep 4 23:49:01.452397 containerd[1714]: time="2025-09-04T23:49:01.452317959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:49:01.452397 containerd[1714]: time="2025-09-04T23:49:01.452385399Z" level=info msg="RemovePodSandbox \"e3fa3d36d42f959bd22ce4d9a7012f955252203e9a3d7665b792c0993dd54464\" returns successfully" Sep 4 23:49:01.511123 kubelet[3284]: E0904 23:49:01.511073 3284 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:49:02.669425 systemd[1]: Created slice kubepods-burstable-pod9a3bdce4_ca37_4871_8a73_2da63508c7ee.slice - libcontainer container kubepods-burstable-pod9a3bdce4_ca37_4871_8a73_2da63508c7ee.slice. Sep 4 23:49:02.702088 sshd[5051]: Connection closed by 10.200.16.10 port 59258 Sep 4 23:49:02.702678 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:02.709525 systemd-logind[1697]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:49:02.710270 systemd[1]: sshd@22-10.200.20.4:22-10.200.16.10:59258.service: Deactivated successfully. Sep 4 23:49:02.715003 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:49:02.717456 systemd-logind[1697]: Removed session 25. 
Sep 4 23:49:02.767698 kubelet[3284]: I0904 23:49:02.767358 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a3bdce4-ca37-4871-8a73-2da63508c7ee-cilium-config-path\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.767698 kubelet[3284]: I0904 23:49:02.767432 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-host-proc-sys-kernel\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.767698 kubelet[3284]: I0904 23:49:02.767471 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9a3bdce4-ca37-4871-8a73-2da63508c7ee-cilium-ipsec-secrets\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.767698 kubelet[3284]: I0904 23:49:02.767495 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-etc-cni-netd\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.767698 kubelet[3284]: I0904 23:49:02.767514 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-host-proc-sys-net\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768162 kubelet[3284]: I0904 23:49:02.767529 3284 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-cilium-cgroup\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768162 kubelet[3284]: I0904 23:49:02.767546 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-xtables-lock\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768162 kubelet[3284]: I0904 23:49:02.767570 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a3bdce4-ca37-4871-8a73-2da63508c7ee-clustermesh-secrets\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768162 kubelet[3284]: I0904 23:49:02.767595 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-bpf-maps\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768162 kubelet[3284]: I0904 23:49:02.767613 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-cni-path\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768162 kubelet[3284]: I0904 23:49:02.767627 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-lib-modules\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768323 kubelet[3284]: I0904 23:49:02.767641 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qc98\" (UniqueName: \"kubernetes.io/projected/9a3bdce4-ca37-4871-8a73-2da63508c7ee-kube-api-access-8qc98\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768323 kubelet[3284]: I0904 23:49:02.767657 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-cilium-run\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768323 kubelet[3284]: I0904 23:49:02.767672 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a3bdce4-ca37-4871-8a73-2da63508c7ee-hostproc\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.768323 kubelet[3284]: I0904 23:49:02.767702 3284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a3bdce4-ca37-4871-8a73-2da63508c7ee-hubble-tls\") pod \"cilium-z6hpg\" (UID: \"9a3bdce4-ca37-4871-8a73-2da63508c7ee\") " pod="kube-system/cilium-z6hpg" Sep 4 23:49:02.795539 systemd[1]: Started sshd@23-10.200.20.4:22-10.200.16.10:59262.service - OpenSSH per-connection server daemon (10.200.16.10:59262). 
Sep 4 23:49:02.973865 containerd[1714]: time="2025-09-04T23:49:02.973821110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6hpg,Uid:9a3bdce4-ca37-4871-8a73-2da63508c7ee,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:03.017159 containerd[1714]: time="2025-09-04T23:49:03.016346892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:03.017159 containerd[1714]: time="2025-09-04T23:49:03.016412132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:03.017159 containerd[1714]: time="2025-09-04T23:49:03.016427652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:03.017159 containerd[1714]: time="2025-09-04T23:49:03.016511932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:03.034404 systemd[1]: Started cri-containerd-409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda.scope - libcontainer container 409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda. 
Sep 4 23:49:03.060308 containerd[1714]: time="2025-09-04T23:49:03.060267353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6hpg,Uid:9a3bdce4-ca37-4871-8a73-2da63508c7ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\"" Sep 4 23:49:03.071683 containerd[1714]: time="2025-09-04T23:49:03.071532988Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:49:03.101976 containerd[1714]: time="2025-09-04T23:49:03.101926255Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307\"" Sep 4 23:49:03.103203 containerd[1714]: time="2025-09-04T23:49:03.102972135Z" level=info msg="StartContainer for \"f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307\"" Sep 4 23:49:03.126386 systemd[1]: Started cri-containerd-f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307.scope - libcontainer container f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307. Sep 4 23:49:03.161176 systemd[1]: cri-containerd-f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307.scope: Deactivated successfully. 
Sep 4 23:49:03.163003 containerd[1714]: time="2025-09-04T23:49:03.162946229Z" level=info msg="StartContainer for \"f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307\" returns successfully" Sep 4 23:49:03.241153 containerd[1714]: time="2025-09-04T23:49:03.241002276Z" level=info msg="shim disconnected" id=f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307 namespace=k8s.io Sep 4 23:49:03.241153 containerd[1714]: time="2025-09-04T23:49:03.241061876Z" level=warning msg="cleaning up after shim disconnected" id=f217bf11280211cef1d3f476fac80ae9f89b92d7756caf22da57da9033bf7307 namespace=k8s.io Sep 4 23:49:03.241153 containerd[1714]: time="2025-09-04T23:49:03.241070996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:03.293560 sshd[5062]: Accepted publickey for core from 10.200.16.10 port 59262 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:49:03.294899 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:03.301724 systemd-logind[1697]: New session 26 of user core. Sep 4 23:49:03.308386 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 23:49:03.644721 sshd[5172]: Connection closed by 10.200.16.10 port 59262 Sep 4 23:49:03.645345 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:03.649128 systemd-logind[1697]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:49:03.649349 systemd[1]: sshd@23-10.200.20.4:22-10.200.16.10:59262.service: Deactivated successfully. Sep 4 23:49:03.651876 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:49:03.652971 systemd-logind[1697]: Removed session 26. Sep 4 23:49:03.736455 systemd[1]: Started sshd@24-10.200.20.4:22-10.200.16.10:59272.service - OpenSSH per-connection server daemon (10.200.16.10:59272). 
Sep 4 23:49:03.831706 containerd[1714]: time="2025-09-04T23:49:03.831500384Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:49:03.862068 containerd[1714]: time="2025-09-04T23:49:03.862013691Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920\"" Sep 4 23:49:03.862905 containerd[1714]: time="2025-09-04T23:49:03.862878170Z" level=info msg="StartContainer for \"8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920\"" Sep 4 23:49:03.903417 systemd[1]: Started cri-containerd-8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920.scope - libcontainer container 8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920. Sep 4 23:49:03.932684 containerd[1714]: time="2025-09-04T23:49:03.932628061Z" level=info msg="StartContainer for \"8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920\" returns successfully" Sep 4 23:49:03.936778 systemd[1]: cri-containerd-8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920.scope: Deactivated successfully. Sep 4 23:49:03.955789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920-rootfs.mount: Deactivated successfully. 
Sep 4 23:49:03.971155 containerd[1714]: time="2025-09-04T23:49:03.971024004Z" level=info msg="shim disconnected" id=8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920 namespace=k8s.io Sep 4 23:49:03.971155 containerd[1714]: time="2025-09-04T23:49:03.971127724Z" level=warning msg="cleaning up after shim disconnected" id=8f50d1656b6b9acfafffdf18b731117df47eedde805f32b5fa037f533f35c920 namespace=k8s.io Sep 4 23:49:03.971155 containerd[1714]: time="2025-09-04T23:49:03.971139244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:04.195948 sshd[5179]: Accepted publickey for core from 10.200.16.10 port 59272 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:49:04.197537 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:04.201493 systemd-logind[1697]: New session 27 of user core. Sep 4 23:49:04.206349 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 23:49:04.836531 containerd[1714]: time="2025-09-04T23:49:04.836476875Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:49:04.872962 containerd[1714]: time="2025-09-04T23:49:04.872859380Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1\"" Sep 4 23:49:04.876213 containerd[1714]: time="2025-09-04T23:49:04.875057139Z" level=info msg="StartContainer for \"60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1\"" Sep 4 23:49:04.909439 systemd[1]: Started cri-containerd-60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1.scope - libcontainer container 60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1. 
Sep 4 23:49:04.945804 systemd[1]: cri-containerd-60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1.scope: Deactivated successfully. Sep 4 23:49:04.947842 containerd[1714]: time="2025-09-04T23:49:04.947563388Z" level=info msg="StartContainer for \"60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1\" returns successfully" Sep 4 23:49:04.971160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1-rootfs.mount: Deactivated successfully. Sep 4 23:49:04.986204 containerd[1714]: time="2025-09-04T23:49:04.985971531Z" level=info msg="shim disconnected" id=60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1 namespace=k8s.io Sep 4 23:49:04.986204 containerd[1714]: time="2025-09-04T23:49:04.986025971Z" level=warning msg="cleaning up after shim disconnected" id=60b4523500ca8cb7dd97a9c49ec569afbd5a9361bbb8f9b2f03a0358548aa6d1 namespace=k8s.io Sep 4 23:49:04.986204 containerd[1714]: time="2025-09-04T23:49:04.986035331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:05.146022 kubelet[3284]: I0904 23:49:05.145896 3284 setters.go:618] "Node became not ready" node="ci-4230.2.2-n-a8c1fd94a3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:49:05Z","lastTransitionTime":"2025-09-04T23:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 23:49:05.848965 containerd[1714]: time="2025-09-04T23:49:05.848920163Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:49:05.899855 containerd[1714]: time="2025-09-04T23:49:05.899808861Z" level=info msg="CreateContainer within sandbox 
\"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36\"" Sep 4 23:49:05.901426 containerd[1714]: time="2025-09-04T23:49:05.901382301Z" level=info msg="StartContainer for \"cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36\"" Sep 4 23:49:05.928281 systemd[1]: run-containerd-runc-k8s.io-cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36-runc.9rLbiK.mount: Deactivated successfully. Sep 4 23:49:05.937398 systemd[1]: Started cri-containerd-cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36.scope - libcontainer container cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36. Sep 4 23:49:05.960946 systemd[1]: cri-containerd-cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36.scope: Deactivated successfully. Sep 4 23:49:05.966877 containerd[1714]: time="2025-09-04T23:49:05.966700233Z" level=info msg="StartContainer for \"cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36\" returns successfully" Sep 4 23:49:05.988466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36-rootfs.mount: Deactivated successfully. 
Sep 4 23:49:06.005705 containerd[1714]: time="2025-09-04T23:49:06.005636976Z" level=info msg="shim disconnected" id=cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36 namespace=k8s.io Sep 4 23:49:06.005705 containerd[1714]: time="2025-09-04T23:49:06.005694296Z" level=warning msg="cleaning up after shim disconnected" id=cc2ba4ccb9df9e61b24cb5714d9c74453eee2d863c88a5652ea003e5e5e8dc36 namespace=k8s.io Sep 4 23:49:06.005705 containerd[1714]: time="2025-09-04T23:49:06.005703856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:06.512878 kubelet[3284]: E0904 23:49:06.512820 3284 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:49:06.570222 update_engine[1699]: I20250904 23:49:06.569784 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 23:49:06.570222 update_engine[1699]: I20250904 23:49:06.570029 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 23:49:06.570630 update_engine[1699]: I20250904 23:49:06.570364 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 23:49:06.694216 update_engine[1699]: E20250904 23:49:06.694135 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694242 1699 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694253 1699 omaha_request_action.cc:617] Omaha request response: Sep 4 23:49:06.694395 update_engine[1699]: E20250904 23:49:06.694332 1699 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694351 1699 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694356 1699 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694361 1699 update_attempter.cc:306] Processing Done. Sep 4 23:49:06.694395 update_engine[1699]: E20250904 23:49:06.694375 1699 update_attempter.cc:619] Update failed. Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694382 1699 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694386 1699 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 4 23:49:06.694395 update_engine[1699]: I20250904 23:49:06.694393 1699 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 4 23:49:06.694601 update_engine[1699]: I20250904 23:49:06.694473 1699 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 23:49:06.694601 update_engine[1699]: I20250904 23:49:06.694495 1699 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 23:49:06.694601 update_engine[1699]: I20250904 23:49:06.694500 1699 omaha_request_action.cc:272] Request: Sep 4 23:49:06.694601 update_engine[1699]: Sep 4 23:49:06.694601 update_engine[1699]: Sep 4 23:49:06.694601 update_engine[1699]: Sep 4 23:49:06.694601 update_engine[1699]: Sep 4 23:49:06.694601 update_engine[1699]: Sep 4 23:49:06.694601 update_engine[1699]: Sep 4 23:49:06.694601 update_engine[1699]: I20250904 23:49:06.694505 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 23:49:06.694776 update_engine[1699]: I20250904 23:49:06.694643 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 23:49:06.695059 update_engine[1699]: I20250904 23:49:06.694856 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 23:49:06.695113 locksmithd[1803]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 4 23:49:06.741281 update_engine[1699]: E20250904 23:49:06.741230 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 23:49:06.741378 update_engine[1699]: I20250904 23:49:06.741348 1699 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 23:49:06.741378 update_engine[1699]: I20250904 23:49:06.741356 1699 omaha_request_action.cc:617] Omaha request response: Sep 4 23:49:06.741378 update_engine[1699]: I20250904 23:49:06.741364 1699 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 23:49:06.741378 update_engine[1699]: I20250904 23:49:06.741367 1699 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 23:49:06.741378 update_engine[1699]: I20250904 23:49:06.741372 1699 update_attempter.cc:306] Processing Done. Sep 4 23:49:06.741629 update_engine[1699]: I20250904 23:49:06.741379 1699 update_attempter.cc:310] Error event sent. 
Sep 4 23:49:06.741629 update_engine[1699]: I20250904 23:49:06.741389 1699 update_check_scheduler.cc:74] Next update check in 45m27s Sep 4 23:49:06.741825 locksmithd[1803]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 4 23:49:06.843846 containerd[1714]: time="2025-09-04T23:49:06.843740739Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:49:06.874787 containerd[1714]: time="2025-09-04T23:49:06.874739045Z" level=info msg="CreateContainer within sandbox \"409236ff5bd630e09d13dc50d625252130445a5aa2e15598ef0c1b2a9cbf5bda\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6624c19a16dc651ba30470543934e0cfc81b9d7145e916edced5c81067b34963\"" Sep 4 23:49:06.875973 containerd[1714]: time="2025-09-04T23:49:06.875375245Z" level=info msg="StartContainer for \"6624c19a16dc651ba30470543934e0cfc81b9d7145e916edced5c81067b34963\"" Sep 4 23:49:06.901388 systemd[1]: Started cri-containerd-6624c19a16dc651ba30470543934e0cfc81b9d7145e916edced5c81067b34963.scope - libcontainer container 6624c19a16dc651ba30470543934e0cfc81b9d7145e916edced5c81067b34963. Sep 4 23:49:06.912147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491908776.mount: Deactivated successfully. 
Sep 4 23:49:06.935408 containerd[1714]: time="2025-09-04T23:49:06.935354340Z" level=info msg="StartContainer for \"6624c19a16dc651ba30470543934e0cfc81b9d7145e916edced5c81067b34963\" returns successfully" Sep 4 23:49:07.475268 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 4 23:49:07.857979 kubelet[3284]: I0904 23:49:07.857826 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z6hpg" podStartSLOduration=5.857810026 podStartE2EDuration="5.857810026s" podCreationTimestamp="2025-09-04 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:49:07.856868546 +0000 UTC m=+190.095039767" watchObservedRunningTime="2025-09-04 23:49:07.857810026 +0000 UTC m=+190.095981207" Sep 4 23:49:10.188077 systemd-networkd[1609]: lxc_health: Link UP Sep 4 23:49:10.200142 systemd-networkd[1609]: lxc_health: Gained carrier Sep 4 23:49:11.543369 systemd-networkd[1609]: lxc_health: Gained IPv6LL Sep 4 23:49:13.017329 systemd[1]: run-containerd-runc-k8s.io-6624c19a16dc651ba30470543934e0cfc81b9d7145e916edced5c81067b34963-runc.Rf7XDO.mount: Deactivated successfully. Sep 4 23:49:17.306681 systemd[1]: run-containerd-runc-k8s.io-6624c19a16dc651ba30470543934e0cfc81b9d7145e916edced5c81067b34963-runc.dr0aJu.mount: Deactivated successfully. Sep 4 23:49:17.443017 sshd[5241]: Connection closed by 10.200.16.10 port 59272 Sep 4 23:49:17.443687 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:17.448928 systemd-logind[1697]: Session 27 logged out. Waiting for processes to exit. Sep 4 23:49:17.449268 systemd[1]: sshd@24-10.200.20.4:22-10.200.16.10:59272.service: Deactivated successfully. Sep 4 23:49:17.452648 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 23:49:17.457540 systemd-logind[1697]: Removed session 27.