Oct 30 23:54:24.296908 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 30 23:54:24.296929 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Oct 30 22:19:25 -00 2025 Oct 30 23:54:24.296937 kernel: KASLR enabled Oct 30 23:54:24.296943 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Oct 30 23:54:24.296950 kernel: printk: bootconsole [pl11] enabled Oct 30 23:54:24.296955 kernel: efi: EFI v2.7 by EDK II Oct 30 23:54:24.296962 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead5018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Oct 30 23:54:24.296968 kernel: random: crng init done Oct 30 23:54:24.296974 kernel: secureboot: Secure boot disabled Oct 30 23:54:24.296980 kernel: ACPI: Early table checksum verification disabled Oct 30 23:54:24.296985 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Oct 30 23:54:24.296991 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.296997 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297004 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Oct 30 23:54:24.297011 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297017 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297024 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297031 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297037 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297044 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297050 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Oct 30 23:54:24.297056 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 30 23:54:24.297062 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Oct 30 23:54:24.297068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Oct 30 23:54:24.297074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Oct 30 23:54:24.297081 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Oct 30 23:54:24.297087 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Oct 30 23:54:24.297093 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Oct 30 23:54:24.297101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Oct 30 23:54:24.297107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Oct 30 23:54:24.297113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Oct 30 23:54:24.297119 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Oct 30 23:54:24.297125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Oct 30 23:54:24.297131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Oct 30 23:54:24.297137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Oct 30 23:54:24.297143 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Oct 30 23:54:24.297149 kernel: Zone ranges: Oct 30 
23:54:24.297155 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Oct 30 23:54:24.297161 kernel: DMA32 empty Oct 30 23:54:24.297168 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Oct 30 23:54:24.297178 kernel: Movable zone start for each node Oct 30 23:54:24.297184 kernel: Early memory node ranges Oct 30 23:54:24.297191 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Oct 30 23:54:24.297197 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Oct 30 23:54:24.297204 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Oct 30 23:54:24.297212 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Oct 30 23:54:24.297218 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Oct 30 23:54:24.297224 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Oct 30 23:54:24.297231 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Oct 30 23:54:24.297237 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Oct 30 23:54:24.297244 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Oct 30 23:54:24.297250 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Oct 30 23:54:24.297257 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Oct 30 23:54:24.297263 kernel: psci: probing for conduit method from ACPI. Oct 30 23:54:24.297269 kernel: psci: PSCIv1.1 detected in firmware. Oct 30 23:54:24.297276 kernel: psci: Using standard PSCI v0.2 function IDs Oct 30 23:54:24.297282 kernel: psci: MIGRATE_INFO_TYPE not supported. Oct 30 23:54:24.297290 kernel: psci: SMC Calling Convention v1.4 Oct 30 23:54:24.297296 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Oct 30 23:54:24.297303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Oct 30 23:54:24.297309 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Oct 30 23:54:24.297316 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Oct 30 23:54:24.297322 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 30 23:54:24.297329 kernel: Detected PIPT I-cache on CPU0 Oct 30 23:54:24.297335 kernel: CPU features: detected: GIC system register CPU interface Oct 30 23:54:24.297342 kernel: CPU features: detected: Hardware dirty bit management Oct 30 23:54:24.297348 kernel: CPU features: detected: Spectre-BHB Oct 30 23:54:24.297354 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 30 23:54:24.297363 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 30 23:54:24.297369 kernel: CPU features: detected: ARM erratum 1418040 Oct 30 23:54:24.297376 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Oct 30 23:54:24.297382 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 30 23:54:24.297389 kernel: alternatives: applying boot alternatives Oct 30 23:54:24.297396 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fa720f16dbb9986f34dd4402492c226087bd8d749299bbe02bbfafab6272d378 Oct 30 23:54:24.297403 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 30 23:54:24.297410 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 30 23:54:24.297416 kernel: Fallback order for Node 0: 0 Oct 30 23:54:24.297423 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 1032156 Oct 30 23:54:24.297429 kernel: Policy zone: Normal Oct 30 23:54:24.297437 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 30 23:54:24.297443 kernel: software IO TLB: area num 2. Oct 30 23:54:24.297450 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB) Oct 30 23:54:24.297457 kernel: Memory: 3983528K/4194160K available (10368K kernel code, 2180K rwdata, 8104K rodata, 38400K init, 897K bss, 210632K reserved, 0K cma-reserved) Oct 30 23:54:24.297463 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 30 23:54:24.297469 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 30 23:54:24.297476 kernel: rcu: RCU event tracing is enabled. Oct 30 23:54:24.297483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 30 23:54:24.297490 kernel: Trampoline variant of Tasks RCU enabled. Oct 30 23:54:24.297496 kernel: Tracing variant of Tasks RCU enabled. Oct 30 23:54:24.297503 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 30 23:54:24.297511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 30 23:54:24.297517 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 30 23:54:24.297524 kernel: GICv3: 960 SPIs implemented Oct 30 23:54:24.297530 kernel: GICv3: 0 Extended SPIs implemented Oct 30 23:54:24.297537 kernel: Root IRQ handler: gic_handle_irq Oct 30 23:54:24.297543 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 30 23:54:24.297549 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Oct 30 23:54:24.297556 kernel: ITS: No ITS available, not enabling LPIs Oct 30 23:54:24.297563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 30 23:54:24.297569 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 30 23:54:24.297576 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 30 23:54:24.297582 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 30 23:54:24.297591 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 30 23:54:24.297597 kernel: Console: colour dummy device 80x25 Oct 30 23:54:24.297604 kernel: printk: console [tty1] enabled Oct 30 23:54:24.297611 kernel: ACPI: Core revision 20230628 Oct 30 23:54:24.297618 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 30 23:54:24.297624 kernel: pid_max: default: 32768 minimum: 301 Oct 30 23:54:24.297631 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 30 23:54:24.297638 kernel: landlock: Up and running. Oct 30 23:54:24.297644 kernel: SELinux: Initializing. Oct 30 23:54:24.297661 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 30 23:54:24.297668 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 30 23:54:24.297675 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 30 23:54:24.297682 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Oct 30 23:54:24.297689 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Oct 30 23:54:24.297696 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Oct 30 23:54:24.297702 kernel: Hyper-V: enabling crash_kexec_post_notifiers Oct 30 23:54:24.297716 kernel: rcu: Hierarchical SRCU implementation. Oct 30 23:54:24.297723 kernel: rcu: Max phase no-delay instances is 400. Oct 30 23:54:24.297730 kernel: Remapping and enabling EFI services. Oct 30 23:54:24.297737 kernel: smp: Bringing up secondary CPUs ... Oct 30 23:54:24.297744 kernel: Detected PIPT I-cache on CPU1 Oct 30 23:54:24.297752 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Oct 30 23:54:24.297760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 30 23:54:24.297767 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 30 23:54:24.297774 kernel: smp: Brought up 1 node, 2 CPUs Oct 30 23:54:24.297780 kernel: SMP: Total of 2 processors activated. Oct 30 23:54:24.297789 kernel: CPU features: detected: 32-bit EL0 Support Oct 30 23:54:24.297796 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Oct 30 23:54:24.297803 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 30 23:54:24.297810 kernel: CPU features: detected: CRC32 instructions Oct 30 23:54:24.297817 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 30 23:54:24.297824 kernel: CPU features: detected: LSE atomic instructions Oct 30 23:54:24.297831 kernel: CPU features: detected: Privileged Access Never Oct 30 23:54:24.297838 kernel: CPU: All CPU(s) started at EL1 Oct 30 23:54:24.297845 kernel: alternatives: applying system-wide alternatives Oct 30 23:54:24.297853 kernel: devtmpfs: initialized Oct 30 23:54:24.297860 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 30 23:54:24.297867 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 30 23:54:24.297874 kernel: pinctrl core: initialized pinctrl subsystem Oct 30 23:54:24.297881 kernel: SMBIOS 3.1.0 present. Oct 30 23:54:24.297888 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Oct 30 23:54:24.297895 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 30 23:54:24.297902 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 30 23:54:24.297909 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 30 23:54:24.297918 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 30 23:54:24.297925 kernel: audit: initializing netlink subsys (disabled) Oct 30 23:54:24.297932 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Oct 30 23:54:24.297939 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 30 23:54:24.297946 kernel: cpuidle: using governor menu Oct 30 23:54:24.297953 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Oct 30 23:54:24.297960 kernel: ASID allocator initialised with 32768 entries Oct 30 23:54:24.297967 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 30 23:54:24.297974 kernel: Serial: AMBA PL011 UART driver Oct 30 23:54:24.297982 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 30 23:54:24.297989 kernel: Modules: 0 pages in range for non-PLT usage Oct 30 23:54:24.297996 kernel: Modules: 509248 pages in range for PLT usage Oct 30 23:54:24.298003 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 30 23:54:24.298010 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 30 23:54:24.298017 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 30 23:54:24.298024 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 30 23:54:24.298031 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 30 23:54:24.298038 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 30 23:54:24.298047 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 30 23:54:24.298054 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 30 23:54:24.298061 kernel: ACPI: Added _OSI(Module Device) Oct 30 23:54:24.298068 kernel: ACPI: Added _OSI(Processor Device) Oct 30 23:54:24.298075 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 30 23:54:24.298082 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 30 23:54:24.298089 kernel: ACPI: Interpreter enabled Oct 30 23:54:24.298096 kernel: ACPI: Using GIC for interrupt routing Oct 30 23:54:24.298103 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Oct 30 23:54:24.298111 kernel: printk: console [ttyAMA0] enabled Oct 30 23:54:24.298118 kernel: printk: bootconsole [pl11] disabled Oct 30 23:54:24.298125 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Oct 30 23:54:24.298132 kernel: iommu: Default domain type: Translated Oct 30 23:54:24.298140 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 30 23:54:24.298147 kernel: efivars: Registered efivars operations Oct 30 23:54:24.298154 kernel: vgaarb: loaded Oct 30 23:54:24.298161 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 30 23:54:24.298168 kernel: VFS: Disk quotas dquot_6.6.0 Oct 30 23:54:24.298176 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 30 23:54:24.298183 kernel: pnp: PnP ACPI init Oct 30 23:54:24.298190 kernel: pnp: PnP ACPI: found 0 devices Oct 30 23:54:24.298197 kernel: NET: Registered PF_INET protocol family Oct 30 23:54:24.298204 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 30 23:54:24.298211 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 30 23:54:24.298218 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 30 23:54:24.298226 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 30 23:54:24.298233 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 30 23:54:24.298241 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 30 23:54:24.298248 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 30 23:54:24.298255 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 30 23:54:24.298262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 30 
23:54:24.298269 kernel: PCI: CLS 0 bytes, default 64 Oct 30 23:54:24.298276 kernel: kvm [1]: HYP mode not available Oct 30 23:54:24.298283 kernel: Initialise system trusted keyrings Oct 30 23:54:24.298290 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 30 23:54:24.298297 kernel: Key type asymmetric registered Oct 30 23:54:24.298306 kernel: Asymmetric key parser 'x509' registered Oct 30 23:54:24.298313 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 30 23:54:24.298320 kernel: io scheduler mq-deadline registered Oct 30 23:54:24.298326 kernel: io scheduler kyber registered Oct 30 23:54:24.298333 kernel: io scheduler bfq registered Oct 30 23:54:24.298340 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 30 23:54:24.298347 kernel: thunder_xcv, ver 1.0 Oct 30 23:54:24.298354 kernel: thunder_bgx, ver 1.0 Oct 30 23:54:24.298361 kernel: nicpf, ver 1.0 Oct 30 23:54:24.298369 kernel: nicvf, ver 1.0 Oct 30 23:54:24.298492 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 30 23:54:24.298562 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-30T23:54:23 UTC (1761868463) Oct 30 23:54:24.298572 kernel: efifb: probing for efifb Oct 30 23:54:24.298579 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Oct 30 23:54:24.298586 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Oct 30 23:54:24.298593 kernel: efifb: scrolling: redraw Oct 30 23:54:24.298600 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 30 23:54:24.298609 kernel: Console: switching to colour frame buffer device 128x48 Oct 30 23:54:24.298616 kernel: fb0: EFI VGA frame buffer device Oct 30 23:54:24.298623 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Oct 30 23:54:24.298630 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 30 23:54:24.298637 kernel: No ACPI PMU IRQ for CPU0 Oct 30 23:54:24.298644 kernel: No ACPI PMU IRQ for CPU1 Oct 30 23:54:24.298661 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Oct 30 23:54:24.298669 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 30 23:54:24.298676 kernel: watchdog: Hard watchdog permanently disabled Oct 30 23:54:24.298685 kernel: NET: Registered PF_INET6 protocol family Oct 30 23:54:24.298692 kernel: Segment Routing with IPv6 Oct 30 23:54:24.298699 kernel: In-situ OAM (IOAM) with IPv6 Oct 30 23:54:24.298706 kernel: NET: Registered PF_PACKET protocol family Oct 30 23:54:24.298713 kernel: Key type dns_resolver registered Oct 30 23:54:24.298720 kernel: registered taskstats version 1 Oct 30 23:54:24.298727 kernel: Loading compiled-in X.509 certificates Oct 30 23:54:24.298734 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: aa1124814e36842ccda0ba5471ce49eeba345bb7' Oct 30 23:54:24.298741 kernel: Key type .fscrypt registered Oct 30 23:54:24.298749 kernel: Key type fscrypt-provisioning registered Oct 30 23:54:24.298756 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 30 23:54:24.298763 kernel: ima: Allocated hash algorithm: sha1 Oct 30 23:54:24.298770 kernel: ima: No architecture policies found Oct 30 23:54:24.298777 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 30 23:54:24.298784 kernel: clk: Disabling unused clocks Oct 30 23:54:24.298791 kernel: Freeing unused kernel memory: 38400K Oct 30 23:54:24.298798 kernel: Run /init as init process Oct 30 23:54:24.298805 kernel: with arguments: Oct 30 23:54:24.298813 kernel: /init Oct 30 23:54:24.298820 kernel: with environment: Oct 30 23:54:24.298827 kernel: HOME=/ Oct 30 23:54:24.298834 kernel: TERM=linux Oct 30 23:54:24.298842 systemd[1]: Successfully made /usr/ read-only. Oct 30 23:54:24.298852 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 23:54:24.298860 systemd[1]: Detected virtualization microsoft. Oct 30 23:54:24.298867 systemd[1]: Detected architecture arm64. Oct 30 23:54:24.298876 systemd[1]: Running in initrd. Oct 30 23:54:24.298883 systemd[1]: No hostname configured, using default hostname. Oct 30 23:54:24.298891 systemd[1]: Hostname set to . Oct 30 23:54:24.298899 systemd[1]: Initializing machine ID from random generator. Oct 30 23:54:24.298906 systemd[1]: Queued start job for default target initrd.target. Oct 30 23:54:24.298914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 23:54:24.298922 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 23:54:24.298930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 30 23:54:24.298939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 23:54:24.298947 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 30 23:54:24.298955 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 30 23:54:24.298963 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 30 23:54:24.298971 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 30 23:54:24.298979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 23:54:24.298988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 23:54:24.298995 systemd[1]: Reached target paths.target - Path Units. Oct 30 23:54:24.299003 systemd[1]: Reached target slices.target - Slice Units. Oct 30 23:54:24.299010 systemd[1]: Reached target swap.target - Swaps. Oct 30 23:54:24.299018 systemd[1]: Reached target timers.target - Timer Units. Oct 30 23:54:24.299025 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 23:54:24.299033 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 23:54:24.299041 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 30 23:54:24.299048 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Oct 30 23:54:24.299057 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 23:54:24.299065 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 23:54:24.299072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 23:54:24.299080 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 23:54:24.299088 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 30 23:54:24.299095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 23:54:24.299103 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 30 23:54:24.299110 systemd[1]: Starting systemd-fsck-usr.service... Oct 30 23:54:24.299118 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 23:54:24.299127 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 23:54:24.299149 systemd-journald[218]: Collecting audit messages is disabled. Oct 30 23:54:24.299167 systemd-journald[218]: Journal started Oct 30 23:54:24.299186 systemd-journald[218]: Runtime Journal (/run/log/journal/d322d3ac4d154211921a77c11f14de30) is 8M, max 78.5M, 70.5M free. Oct 30 23:54:24.311000 systemd-modules-load[220]: Inserted module 'overlay' Oct 30 23:54:24.319980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:54:24.342751 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 30 23:54:24.342798 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 23:54:24.346357 kernel: Bridge firewalling registered Oct 30 23:54:24.348711 systemd-modules-load[220]: Inserted module 'br_netfilter' Oct 30 23:54:24.352288 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 23:54:24.370368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 23:54:24.378029 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 23:54:24.389135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 23:54:24.398999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:54:24.418826 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 23:54:24.425776 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 23:54:24.450150 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 23:54:24.467825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 23:54:24.483675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:54:24.499685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:54:24.505519 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 23:54:24.518209 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 23:54:24.546858 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 30 23:54:24.555811 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 30 23:54:24.575067 dracut-cmdline[252]: dracut-dracut-053 Oct 30 23:54:24.584073 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fa720f16dbb9986f34dd4402492c226087bd8d749299bbe02bbfafab6272d378 Oct 30 23:54:24.618852 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 23:54:24.637759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 23:54:24.648905 systemd-resolved[255]: Positive Trust Anchors: Oct 30 23:54:24.648916 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 23:54:24.648946 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 23:54:24.651055 systemd-resolved[255]: Defaulting to hostname 'linux'. Oct 30 23:54:24.652040 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 23:54:24.659341 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 23:54:24.777674 kernel: SCSI subsystem initialized Oct 30 23:54:24.784664 kernel: Loading iSCSI transport class v2.0-870. Oct 30 23:54:24.794664 kernel: iscsi: registered transport (tcp) Oct 30 23:54:24.813343 kernel: iscsi: registered transport (qla4xxx) Oct 30 23:54:24.813391 kernel: QLogic iSCSI HBA Driver Oct 30 23:54:24.845978 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 23:54:24.860842 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 30 23:54:24.892918 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 30 23:54:24.892969 kernel: device-mapper: uevent: version 1.0.3 Oct 30 23:54:24.899191 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 30 23:54:24.947685 kernel: raid6: neonx8 gen() 15745 MB/s Oct 30 23:54:24.967671 kernel: raid6: neonx4 gen() 15833 MB/s Oct 30 23:54:24.987661 kernel: raid6: neonx2 gen() 13195 MB/s Oct 30 23:54:25.008668 kernel: raid6: neonx1 gen() 10520 MB/s Oct 30 23:54:25.028665 kernel: raid6: int64x8 gen() 6796 MB/s Oct 30 23:54:25.048661 kernel: raid6: int64x4 gen() 7359 MB/s Oct 30 23:54:25.069662 kernel: raid6: int64x2 gen() 6114 MB/s Oct 30 23:54:25.093052 kernel: raid6: int64x1 gen() 5062 MB/s Oct 30 23:54:25.093084 kernel: raid6: using algorithm neonx4 gen() 15833 MB/s Oct 30 23:54:25.117384 kernel: raid6: .... 
xor() 12290 MB/s, rmw enabled Oct 30 23:54:25.117401 kernel: raid6: using neon recovery algorithm Oct 30 23:54:25.130637 kernel: xor: measuring software checksum speed Oct 30 23:54:25.130674 kernel: 8regs : 21613 MB/sec Oct 30 23:54:25.134117 kernel: 32regs : 21670 MB/sec Oct 30 23:54:25.137507 kernel: arm64_neon : 27889 MB/sec Oct 30 23:54:25.141700 kernel: xor: using function: arm64_neon (27889 MB/sec) Oct 30 23:54:25.192680 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 30 23:54:25.202542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 23:54:25.220781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 23:54:25.245490 systemd-udevd[440]: Using default interface naming scheme 'v255'. Oct 30 23:54:25.249346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 23:54:25.275877 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 23:54:25.288881 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation Oct 30 23:54:25.318143 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 23:54:25.332861 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 23:54:25.372996 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 23:54:25.393008 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 23:54:25.421049 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 23:54:25.436032 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 23:54:25.451121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 23:54:25.464530 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 23:54:25.480804 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 23:54:25.498685 kernel: hv_vmbus: Vmbus version:5.3 Oct 30 23:54:25.499344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 30 23:54:25.519970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 23:54:25.525607 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:54:25.544477 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 23:54:25.608792 kernel: hv_vmbus: registering driver hid_hyperv Oct 30 23:54:25.608814 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 30 23:54:25.608823 kernel: hv_vmbus: registering driver hv_netvsc Oct 30 23:54:25.608832 kernel: hv_vmbus: registering driver hyperv_keyboard Oct 30 23:54:25.608842 kernel: hv_vmbus: registering driver hv_storvsc Oct 30 23:54:25.608851 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 30 23:54:25.608860 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Oct 30 23:54:25.608871 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Oct 30 23:54:25.608880 kernel: scsi host1: storvsc_host_t Oct 30 23:54:25.609032 kernel: scsi host0: storvsc_host_t Oct 30 23:54:25.609393 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Oct 30 23:54:25.562499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 30 23:54:25.636082 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Oct 30 23:54:25.636127 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Oct 30 23:54:25.562724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:54:25.658796 kernel: PTP clock support registered Oct 30 23:54:25.636103 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:54:25.684258 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Oct 30 23:54:25.684464 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 30 23:54:25.658956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:54:25.712858 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Oct 30 23:54:25.713036 kernel: hv_utils: Registering HyperV Utility Driver Oct 30 23:54:25.713049 kernel: hv_vmbus: registering driver hv_utils Oct 30 23:54:25.703058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:54:25.743168 kernel: hv_utils: Heartbeat IC version 3.0 Oct 30 23:54:25.743191 kernel: hv_netvsc 002248c1-8c80-0022-48c1-8c80002248c1 eth0: VF slot 1 added Oct 30 23:54:25.743521 kernel: hv_utils: Shutdown IC version 3.2 Oct 30 23:54:25.743535 kernel: hv_utils: TimeSync IC version 4.0 Oct 30 23:54:25.250479 systemd-resolved[255]: Clock change detected. Flushing caches. Oct 30 23:54:25.280856 systemd-journald[218]: Time jumped backwards, rotating. Oct 30 23:54:25.280916 kernel: hv_vmbus: registering driver hv_pci Oct 30 23:54:25.256654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 23:54:25.519930 kernel: hv_pci 1a17ffa8-0fad-4c92-b5db-eb636eecbd84: PCI VMBus probing: Using version 0x10004 Oct 30 23:54:25.520111 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Oct 30 23:54:25.520220 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Oct 30 23:54:25.520309 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 30 23:54:25.520389 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Oct 30 23:54:25.520472 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Oct 30 23:54:25.520552 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 23:54:25.520563 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 30 23:54:25.520645 kernel: hv_pci 1a17ffa8-0fad-4c92-b5db-eb636eecbd84: PCI host bridge to bus 0fad:00 Oct 30 23:54:25.520722 kernel: pci_bus 0fad:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Oct 30 23:54:25.520815 kernel: pci_bus 0fad:00: No busn resource found for root bus, will use [bus 00-ff] Oct 30 23:54:25.520906 kernel: pci 0fad:00:02.0: [15b3:1018] type 00 class 0x020000 Oct 30 23:54:25.520929 kernel: pci 0fad:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Oct 30 23:54:25.258912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:54:25.574394 kernel: pci 0fad:00:02.0: enabling Extended Tags Oct 30 23:54:25.273556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 30 23:54:25.599324 kernel: pci 0fad:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0fad:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Oct 30 23:54:25.600298 kernel: pci_bus 0fad:00: busn_res: [bus 00-ff] end is updated to 00 Oct 30 23:54:25.600393 kernel: pci 0fad:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Oct 30 23:54:25.508558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:54:25.581547 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:54:25.628684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:54:25.648062 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 23:54:25.676309 kernel: mlx5_core 0fad:00:02.0: enabling device (0000 -> 0002) Oct 30 23:54:25.683896 kernel: mlx5_core 0fad:00:02.0: firmware version: 16.31.2424 Oct 30 23:54:25.710653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:54:25.968327 kernel: hv_netvsc 002248c1-8c80-0022-48c1-8c80002248c1 eth0: VF registering: eth1 Oct 30 23:54:25.968506 kernel: mlx5_core 0fad:00:02.0 eth1: joined to eth0 Oct 30 23:54:25.979954 kernel: mlx5_core 0fad:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Oct 30 23:54:25.989939 kernel: mlx5_core 0fad:00:02.0 enP4013s1: renamed from eth1 Oct 30 23:54:26.218106 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Oct 30 23:54:26.256333 kernel: BTRFS: device fsid 19e89659-6f9c-4c3c-9ebb-614770f236c4 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (486) Oct 30 23:54:26.258451 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Oct 30 23:54:26.278012 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (502) Oct 30 23:54:26.287280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Oct 30 23:54:26.302653 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Oct 30 23:54:26.309836 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Oct 30 23:54:26.345039 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 23:54:26.373897 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 23:54:26.385906 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 23:54:27.398760 disk-uuid[609]: The operation has completed successfully. Oct 30 23:54:27.404164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 30 23:54:27.477115 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 23:54:27.478906 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 30 23:54:27.526055 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 30 23:54:27.539007 sh[695]: Success Oct 30 23:54:27.569956 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 30 23:54:27.969784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 30 23:54:27.977024 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 30 23:54:27.996060 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Oct 30 23:54:28.027759 kernel: BTRFS info (device dm-0): first mount of filesystem 19e89659-6f9c-4c3c-9ebb-614770f236c4 Oct 30 23:54:28.027814 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:54:28.035015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 30 23:54:28.040070 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 30 23:54:28.044340 kernel: BTRFS info (device dm-0): using free space tree Oct 30 23:54:28.461714 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 30 23:54:28.467191 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 30 23:54:28.487110 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 30 23:54:28.496052 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 30 23:54:28.541282 kernel: BTRFS info (device sda6): first mount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:54:28.541346 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:54:28.545512 kernel: BTRFS info (device sda6): using free space tree Oct 30 23:54:28.598052 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 23:54:28.619065 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 23:54:28.634822 kernel: BTRFS info (device sda6): auto enabling async discard Oct 30 23:54:28.647900 kernel: BTRFS info (device sda6): last unmount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:54:28.653232 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 23:54:28.667875 systemd-networkd[870]: lo: Link UP Oct 30 23:54:28.667896 systemd-networkd[870]: lo: Gained carrier Oct 30 23:54:28.669925 systemd-networkd[870]: Enumeration completed Oct 30 23:54:28.670620 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:54:28.670624 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 23:54:28.673062 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 30 23:54:28.683773 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 23:54:28.697870 systemd[1]: Reached target network.target - Network. Oct 30 23:54:28.781899 kernel: mlx5_core 0fad:00:02.0 enP4013s1: Link up Oct 30 23:54:28.862897 kernel: hv_netvsc 002248c1-8c80-0022-48c1-8c80002248c1 eth0: Data path switched to VF: enP4013s1 Oct 30 23:54:28.864155 systemd-networkd[870]: enP4013s1: Link UP Oct 30 23:54:28.864378 systemd-networkd[870]: eth0: Link UP Oct 30 23:54:28.864736 systemd-networkd[870]: eth0: Gained carrier Oct 30 23:54:28.864745 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 30 23:54:28.889395 systemd-networkd[870]: enP4013s1: Gained carrier Oct 30 23:54:28.900919 systemd-networkd[870]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 30 23:54:29.748613 ignition[878]: Ignition 2.20.0 Oct 30 23:54:29.748624 ignition[878]: Stage: fetch-offline Oct 30 23:54:29.748660 ignition[878]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:54:29.757917 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 23:54:29.748668 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 23:54:29.752130 ignition[878]: parsed url from cmdline: "" Oct 30 23:54:29.752136 ignition[878]: no config URL provided Oct 30 23:54:29.752143 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 23:54:29.752157 ignition[878]: no config at "/usr/lib/ignition/user.ign" Oct 30 23:54:29.784135 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 30 23:54:29.752163 ignition[878]: failed to fetch config: resource requires networking Oct 30 23:54:29.752503 ignition[878]: Ignition finished successfully Oct 30 23:54:29.814768 ignition[887]: Ignition 2.20.0 Oct 30 23:54:29.814775 ignition[887]: Stage: fetch Oct 30 23:54:29.814968 ignition[887]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:54:29.814977 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 23:54:29.815060 ignition[887]: parsed url from cmdline: "" Oct 30 23:54:29.815063 ignition[887]: no config URL provided Oct 30 23:54:29.815068 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 23:54:29.815074 ignition[887]: no config at "/usr/lib/ignition/user.ign" Oct 30 23:54:29.815101 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Oct 30 23:54:29.949645 ignition[887]: GET result: OK Oct 30 23:54:29.949747 ignition[887]: config has been read from IMDS userdata Oct 30 23:54:29.949786 ignition[887]: parsing config with SHA512: cfea11a37059bdf0a17435673717f9bbb0b5bb0e31e76022d21ce0e709215dbab345f39f762244cfa95d08709939cc5c420384550aa38bca034b2dea36379b2a Oct 30 23:54:29.954200 unknown[887]: fetched base config from "system" Oct 30 23:54:29.954611 ignition[887]: fetch: fetch complete Oct 30 23:54:29.954207 unknown[887]: fetched base config from "system" Oct 30 23:54:29.954628 ignition[887]: fetch: fetch passed Oct 30 23:54:29.954212 unknown[887]: fetched user config from "azure" Oct 30 23:54:29.954672 ignition[887]: Ignition finished successfully Oct 30 23:54:29.956672 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 30 23:54:29.980107 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 30 23:54:30.006422 ignition[894]: Ignition 2.20.0 Oct 30 23:54:30.006437 ignition[894]: Stage: kargs Oct 30 23:54:30.006610 ignition[894]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:54:30.012763 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 30 23:54:30.006620 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 23:54:30.007736 ignition[894]: kargs: kargs passed Oct 30 23:54:30.007781 ignition[894]: Ignition finished successfully Oct 30 23:54:30.036159 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 30 23:54:30.061196 ignition[900]: Ignition 2.20.0 Oct 30 23:54:30.061207 ignition[900]: Stage: disks Oct 30 23:54:30.061366 ignition[900]: no configs at "/usr/lib/ignition/base.d" Oct 30 23:54:30.061376 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 23:54:30.072163 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 30 23:54:30.062299 ignition[900]: disks: disks passed Oct 30 23:54:30.081516 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 23:54:30.062341 ignition[900]: Ignition finished successfully Oct 30 23:54:30.093143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 23:54:30.105249 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 23:54:30.114160 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 23:54:30.127403 systemd[1]: Reached target basic.target - Basic System. Oct 30 23:54:30.154135 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 30 23:54:30.225840 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Oct 30 23:54:30.237057 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 23:54:30.255037 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 30 23:54:30.327911 kernel: EXT4-fs (sda9): mounted filesystem 1621dc2d-b1da-466c-b741-5cdb5d67d58e r/w with ordered data mode. Quota mode: none. Oct 30 23:54:30.328947 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 23:54:30.333798 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 23:54:30.386996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 23:54:30.409900 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (920) Oct 30 23:54:30.424480 kernel: BTRFS info (device sda6): first mount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:54:30.424531 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:54:30.417024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 23:54:30.432972 kernel: BTRFS info (device sda6): using free space tree Oct 30 23:54:30.438491 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 30 23:54:30.453225 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 30 23:54:30.453283 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 23:54:30.490417 kernel: BTRFS info (device sda6): auto enabling async discard Oct 30 23:54:30.463735 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 30 23:54:30.485214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 30 23:54:30.500062 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 30 23:54:30.749006 systemd-networkd[870]: eth0: Gained IPv6LL Oct 30 23:54:31.276295 coreos-metadata[935]: Oct 30 23:54:31.276 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 30 23:54:31.286591 coreos-metadata[935]: Oct 30 23:54:31.286 INFO Fetch successful Oct 30 23:54:31.286591 coreos-metadata[935]: Oct 30 23:54:31.286 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 30 23:54:31.302432 coreos-metadata[935]: Oct 30 23:54:31.302 INFO Fetch successful Oct 30 23:54:31.309904 coreos-metadata[935]: Oct 30 23:54:31.303 INFO wrote hostname ci-4230.2.4-n-0164ad71e3 to /sysroot/etc/hostname Oct 30 23:54:31.310846 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 30 23:54:31.686104 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 23:54:31.730254 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Oct 30 23:54:31.767746 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 23:54:31.790698 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 23:54:32.941824 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 23:54:32.964062 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 23:54:32.976048 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 23:54:32.990426 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 23:54:32.999392 kernel: BTRFS info (device sda6): last unmount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:54:33.024925 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 30 23:54:33.031944 ignition[1039]: INFO : Ignition 2.20.0 Oct 30 23:54:33.031944 ignition[1039]: INFO : Stage: mount Oct 30 23:54:33.031944 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 23:54:33.031944 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 23:54:33.031944 ignition[1039]: INFO : mount: mount passed Oct 30 23:54:33.031944 ignition[1039]: INFO : Ignition finished successfully Oct 30 23:54:33.037996 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 23:54:33.057316 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 23:54:33.089108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 23:54:33.120926 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1051) Oct 30 23:54:33.142043 kernel: BTRFS info (device sda6): first mount of filesystem 69797441-c23d-4add-9f10-ca7ed5585018 Oct 30 23:54:33.142087 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 30 23:54:33.146955 kernel: BTRFS info (device sda6): using free space tree Oct 30 23:54:33.155910 kernel: BTRFS info (device sda6): auto enabling async discard Oct 30 23:54:33.157818 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 23:54:33.188906 ignition[1068]: INFO : Ignition 2.20.0 Oct 30 23:54:33.188906 ignition[1068]: INFO : Stage: files Oct 30 23:54:33.188906 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 23:54:33.188906 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 23:54:33.210987 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Oct 30 23:54:33.210987 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 30 23:54:33.210987 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 30 23:54:33.336249 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 30 23:54:33.345711 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 30 23:54:33.345711 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 30 23:54:33.336641 unknown[1068]: wrote ssh authorized keys file for user: core Oct 30 23:54:33.380536 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 30 23:54:33.391040 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 30 23:54:33.417720 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 30 23:54:33.545241 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 30 23:54:33.545241 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 30 23:54:33.545241 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 30 23:54:33.737145 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 30 23:54:33.831949 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 30 23:54:33.831949 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 
23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:54:33.850719 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 30 23:54:34.367818 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 30 23:54:34.653341 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 30 23:54:34.653341 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 30 23:54:34.704915 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 23:54:34.716208 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 23:54:34.716208 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 30 23:54:34.716208 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Oct 30 23:54:34.716208 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 23:54:34.716208 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 23:54:34.716208 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 30 23:54:34.716208 ignition[1068]: INFO : files: files passed Oct 30 23:54:34.716208 ignition[1068]: INFO : Ignition finished successfully Oct 30 23:54:34.716506 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 30 23:54:34.748753 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 30 23:54:34.762143 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 30 23:54:34.840466 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 23:54:34.840466 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 30 23:54:34.789341 systemd[1]: ignition-quench.service: Deactivated successfully. 
Oct 30 23:54:34.875196 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 23:54:34.789430 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 30 23:54:34.797587 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 23:54:34.808371 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 30 23:54:34.841086 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 30 23:54:34.886767 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 30 23:54:34.886856 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 30 23:54:34.901609 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 30 23:54:34.914281 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 30 23:54:34.926178 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 30 23:54:34.946110 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 30 23:54:34.974081 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 23:54:35.003095 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 30 23:54:35.027692 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 30 23:54:35.034638 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 23:54:35.047184 systemd[1]: Stopped target timers.target - Timer Units. Oct 30 23:54:35.058094 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 30 23:54:35.058166 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 23:54:35.075992 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 30 23:54:35.090827 systemd[1]: Stopped target basic.target - Basic System. Oct 30 23:54:35.101628 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 30 23:54:35.112185 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 23:54:35.124423 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 30 23:54:35.137204 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 30 23:54:35.148817 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 23:54:35.163208 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 30 23:54:35.175654 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 30 23:54:35.186580 systemd[1]: Stopped target swap.target - Swaps. Oct 30 23:54:35.196528 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 30 23:54:35.196619 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 30 23:54:35.211566 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 30 23:54:35.217722 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 23:54:35.229499 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 30 23:54:35.237060 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 23:54:35.243941 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Oct 30 23:54:35.244013 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 30 23:54:35.262077 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 30 23:54:35.262152 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 23:54:35.276435 systemd[1]: ignition-files.service: Deactivated successfully. Oct 30 23:54:35.276483 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 30 23:54:35.286625 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 30 23:54:35.286670 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 30 23:54:35.350581 ignition[1120]: INFO : Ignition 2.20.0 Oct 30 23:54:35.350581 ignition[1120]: INFO : Stage: umount Oct 30 23:54:35.350581 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 23:54:35.350581 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 30 23:54:35.350581 ignition[1120]: INFO : umount: umount passed Oct 30 23:54:35.350581 ignition[1120]: INFO : Ignition finished successfully Oct 30 23:54:35.320056 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 30 23:54:35.336141 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 30 23:54:35.336227 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 23:54:35.373010 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 30 23:54:35.381420 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 30 23:54:35.381493 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 23:54:35.387976 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 30 23:54:35.388023 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 23:54:35.406294 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 30 23:54:35.408490 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 30 23:54:35.419736 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 30 23:54:35.419819 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 30 23:54:35.434836 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 30 23:54:35.434961 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 30 23:54:35.445380 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 30 23:54:35.445456 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 30 23:54:35.468203 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 30 23:54:35.468266 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 30 23:54:35.474014 systemd[1]: Stopped target network.target - Network. Oct 30 23:54:35.483581 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 30 23:54:35.483656 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 23:54:35.499843 systemd[1]: Stopped target paths.target - Path Units. Oct 30 23:54:35.510820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 30 23:54:35.516999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 23:54:35.524362 systemd[1]: Stopped target slices.target - Slice Units. Oct 30 23:54:35.533941 systemd[1]: Stopped target sockets.target - Socket Units. 
Oct 30 23:54:35.544254 systemd[1]: iscsid.socket: Deactivated successfully. Oct 30 23:54:35.544312 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 23:54:35.557787 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 30 23:54:35.557833 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 23:54:35.573788 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 30 23:54:35.573853 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 30 23:54:35.584199 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 30 23:54:35.584249 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 30 23:54:35.595389 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 30 23:54:35.606534 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 30 23:54:35.625281 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 30 23:54:35.625832 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 30 23:54:35.625933 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 30 23:54:35.653503 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 30 23:54:35.653760 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 30 23:54:35.654027 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 30 23:54:35.678425 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 30 23:54:35.679448 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 30 23:54:35.679511 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 30 23:54:35.711050 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 30 23:54:35.720285 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 30 23:54:35.720357 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 23:54:35.732340 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 23:54:35.732395 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:54:35.748765 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 30 23:54:35.920188 kernel: hv_netvsc 002248c1-8c80-0022-48c1-8c80002248c1 eth0: Data path switched from VF: enP4013s1 Oct 30 23:54:35.748817 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 30 23:54:35.754783 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 30 23:54:35.754827 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 23:54:35.774394 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 23:54:35.784581 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 30 23:54:35.784647 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:54:35.821896 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 30 23:54:35.822099 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 23:54:35.832976 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 30 23:54:35.833020 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 30 23:54:35.846914 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Oct 30 23:54:35.846947 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 23:54:35.859349 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 30 23:54:35.859405 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 30 23:54:35.872962 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 30 23:54:35.873043 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 30 23:54:35.890758 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 23:54:35.890825 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 23:54:35.939083 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 30 23:54:35.957412 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 30 23:54:35.957488 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 23:54:35.978030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 23:54:35.978088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:54:35.992486 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 30 23:54:35.992552 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:54:35.992900 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 30 23:54:35.993009 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 30 23:54:36.006357 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 30 23:54:36.006438 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 30 23:54:36.019398 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 30 23:54:36.019491 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 30 23:54:36.031515 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 30 23:54:36.041668 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 30 23:54:36.041758 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 30 23:54:36.073158 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 30 23:54:36.106127 systemd[1]: Switching root. Oct 30 23:54:36.217302 systemd-journald[218]: Journal stopped Oct 30 23:55:01.535050 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Oct 30 23:55:01.535076 kernel: SELinux: policy capability network_peer_controls=1 Oct 30 23:55:01.535087 kernel: SELinux: policy capability open_perms=1 Oct 30 23:55:01.535098 kernel: SELinux: policy capability extended_socket_class=1 Oct 30 23:55:01.535106 kernel: SELinux: policy capability always_check_network=0 Oct 30 23:55:01.535114 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 30 23:55:01.535122 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 30 23:55:01.535131 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 30 23:55:01.535139 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 30 23:55:01.535148 systemd[1]: Successfully loaded SELinux policy in 778.449ms. Oct 30 23:55:01.535159 kernel: audit: type=1403 audit(1761868478.976:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 30 23:55:01.535168 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.520ms. 
Oct 30 23:55:01.535178 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 23:55:01.535187 systemd[1]: Detected virtualization microsoft. Oct 30 23:55:01.535197 systemd[1]: Detected architecture arm64. Oct 30 23:55:01.535208 systemd[1]: Detected first boot. Oct 30 23:55:01.535217 systemd[1]: Hostname set to . Oct 30 23:55:01.535226 systemd[1]: Initializing machine ID from random generator. Oct 30 23:55:01.535235 zram_generator::config[1165]: No configuration found. Oct 30 23:55:01.535245 kernel: NET: Registered PF_VSOCK protocol family Oct 30 23:55:01.535253 systemd[1]: Populated /etc with preset unit settings. Oct 30 23:55:01.535264 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 30 23:55:01.535274 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 30 23:55:01.535283 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 30 23:55:01.535291 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 30 23:55:01.535301 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 30 23:55:01.535311 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 30 23:55:01.535320 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 30 23:55:01.535330 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 30 23:55:01.535341 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 30 23:55:01.535351 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 30 23:55:01.535360 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 30 23:55:01.535371 systemd[1]: Created slice user.slice - User and Session Slice. Oct 30 23:55:01.535380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 23:55:01.535390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 23:55:01.535400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 30 23:55:01.535410 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 30 23:55:01.535421 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 30 23:55:01.535430 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 23:55:01.535439 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 30 23:55:01.535451 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 23:55:01.535460 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 30 23:55:01.535470 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 30 23:55:01.535479 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 30 23:55:01.535491 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Oct 30 23:55:01.535500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 23:55:01.535510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 23:55:01.535519 systemd[1]: Reached target slices.target - Slice Units. Oct 30 23:55:01.535528 systemd[1]: Reached target swap.target - Swaps. Oct 30 23:55:01.535537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 30 23:55:01.535546 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 30 23:55:01.535555 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 30 23:55:01.535567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 23:55:01.535576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 23:55:01.535586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 23:55:01.535595 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 30 23:55:01.535605 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 30 23:55:01.535616 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 30 23:55:01.535626 systemd[1]: Mounting media.mount - External Media Directory... Oct 30 23:55:01.535635 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 30 23:55:01.535644 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 30 23:55:01.535654 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 30 23:55:01.535664 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 30 23:55:01.535674 systemd[1]: Reached target machines.target - Containers. Oct 30 23:55:01.535683 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 30 23:55:01.535693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:01.535704 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 23:55:01.535714 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 23:55:01.535723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 23:55:01.535733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 23:55:01.535743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 23:55:01.535753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 23:55:01.535763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 23:55:01.535772 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 30 23:55:01.535784 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 30 23:55:01.535793 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 30 23:55:01.535802 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 30 23:55:01.535812 systemd[1]: Stopped systemd-fsck-usr.service. 
Oct 30 23:55:01.535823 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:01.535833 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 23:55:01.535842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 23:55:01.535852 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 23:55:01.535864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 30 23:55:01.535873 kernel: loop: module loaded Oct 30 23:55:01.535892 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 30 23:55:01.535904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 23:55:01.535914 systemd[1]: verity-setup.service: Deactivated successfully. Oct 30 23:55:01.535923 systemd[1]: Stopped verity-setup.service. Oct 30 23:55:01.535933 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 30 23:55:01.535942 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 30 23:55:01.535951 systemd[1]: Mounted media.mount - External Media Directory. Oct 30 23:55:01.535963 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 30 23:55:01.535972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 30 23:55:01.535982 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 30 23:55:01.535991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 23:55:01.536000 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 23:55:01.536010 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 23:55:01.536019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 23:55:01.536028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 23:55:01.536039 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 23:55:01.536048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 23:55:01.536058 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 23:55:01.536087 systemd-journald[1245]: Collecting audit messages is disabled. Oct 30 23:55:01.536112 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 30 23:55:01.536123 systemd-journald[1245]: Journal started Oct 30 23:55:01.536143 systemd-journald[1245]: Runtime Journal (/run/log/journal/23bbbf22492a4a91adfa840cd1c3b3fb) is 8M, max 78.5M, 70.5M free. Oct 30 23:54:59.434261 systemd[1]: Queued start job for default target multi-user.target. Oct 30 23:54:59.438670 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 30 23:54:59.439040 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 30 23:54:59.439350 systemd[1]: systemd-journald.service: Consumed 3.306s CPU time. Oct 30 23:55:01.548926 kernel: fuse: init (API version 7.39) Oct 30 23:55:01.549911 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 23:55:01.560523 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 23:55:01.560711 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Oct 30 23:55:01.567287 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 30 23:55:01.590006 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 30 23:55:01.597816 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 30 23:55:01.604083 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 30 23:55:01.604124 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 23:55:01.610788 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 30 23:55:01.618619 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 23:55:01.625863 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 30 23:55:01.631687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:01.762900 kernel: ACPI: bus type drm_connector registered Oct 30 23:55:01.818030 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 30 23:55:01.825380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 30 23:55:01.831841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 23:55:01.833122 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 30 23:55:01.839213 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 23:55:01.840563 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 30 23:55:01.848937 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 23:55:01.849212 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 23:55:01.855407 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 23:55:01.863554 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 23:55:01.869984 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 23:55:01.876557 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 30 23:55:01.884919 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 30 23:55:01.894976 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 23:55:01.909224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 23:55:02.214602 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 30 23:55:02.261974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 23:55:02.275155 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 30 23:55:02.285740 udevadm[1295]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 30 23:55:02.374823 systemd-journald[1245]: Time spent on flushing to /var/log/journal/23bbbf22492a4a91adfa840cd1c3b3fb is 1.155784s for 916 entries. 
Oct 30 23:55:02.374823 systemd-journald[1245]: System Journal (/var/log/journal/23bbbf22492a4a91adfa840cd1c3b3fb) is 11.8M, max 2.6G, 2.6G free. Oct 30 23:55:06.598182 systemd-journald[1245]: Received client request to flush runtime journal. Oct 30 23:55:06.598232 kernel: loop0: detected capacity change from 0 to 123192 Oct 30 23:55:06.598255 systemd-journald[1245]: /var/log/journal/23bbbf22492a4a91adfa840cd1c3b3fb/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Oct 30 23:55:06.598322 systemd-journald[1245]: Rotating system journal. Oct 30 23:55:03.014353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:55:03.159061 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 30 23:55:03.165684 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 30 23:55:03.176040 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 30 23:55:03.302374 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 30 23:55:03.332153 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 30 23:55:06.600366 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 30 23:55:06.915907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 30 23:55:07.513465 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 30 23:55:07.524098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 23:55:07.970906 kernel: loop1: detected capacity change from 0 to 28720 Oct 30 23:55:08.674355 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 23:55:09.415462 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Oct 30 23:55:09.415482 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Oct 30 23:55:09.419936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 23:55:09.436052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 23:55:09.457674 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Oct 30 23:55:09.797825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 23:55:09.799076 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 30 23:55:11.948911 kernel: loop2: detected capacity change from 0 to 207008 Oct 30 23:55:12.803911 kernel: loop3: detected capacity change from 0 to 113512 Oct 30 23:55:13.210416 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 23:55:13.228068 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 23:55:13.276050 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 30 23:55:13.292483 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 23:55:13.571749 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 23:55:13.723907 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 23:55:13.880501 kernel: hv_vmbus: registering driver hv_balloon Oct 30 23:55:13.880597 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Oct 30 23:55:13.884477 kernel: hv_balloon: Memory hot add disabled on ARM64 Oct 30 23:55:13.979166 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 30 23:55:14.222943 kernel: hv_vmbus: registering driver hyperv_fb Oct 30 23:55:14.233802 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Oct 30 23:55:14.233908 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Oct 30 23:55:14.239984 kernel: Console: switching to colour dummy device 80x25 Oct 30 23:55:14.241900 kernel: Console: switching to colour frame buffer device 128x48 Oct 30 23:55:14.361721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 23:55:14.361963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 23:55:14.369808 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 23:55:14.375071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 23:55:14.436908 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1337) Oct 30 23:55:14.629427 systemd-networkd[1341]: lo: Link UP Oct 30 23:55:14.629773 systemd-networkd[1341]: lo: Gained carrier Oct 30 23:55:14.631676 systemd-networkd[1341]: Enumeration completed Oct 30 23:55:14.631862 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 23:55:14.632260 systemd-networkd[1341]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:14.632339 systemd-networkd[1341]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 23:55:14.644045 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 23:55:14.656288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 30 23:55:14.694914 kernel: mlx5_core 0fad:00:02.0 enP4013s1: Link up Oct 30 23:55:14.741902 kernel: hv_netvsc 002248c1-8c80-0022-48c1-8c80002248c1 eth0: Data path switched to VF: enP4013s1 Oct 30 23:55:14.742500 systemd-networkd[1341]: enP4013s1: Link UP Oct 30 23:55:14.742589 systemd-networkd[1341]: eth0: Link UP Oct 30 23:55:14.742598 systemd-networkd[1341]: eth0: Gained carrier Oct 30 23:55:14.742611 systemd-networkd[1341]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:14.748108 systemd-networkd[1341]: enP4013s1: Gained carrier Oct 30 23:55:14.757945 systemd-networkd[1341]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 30 23:55:14.892818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Oct 30 23:55:14.900488 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 23:55:14.911088 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 23:55:15.006944 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 23:55:15.066980 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 30 23:55:15.081069 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Oct 30 23:55:15.097906 kernel: loop4: detected capacity change from 0 to 123192 Oct 30 23:55:15.115073 kernel: loop5: detected capacity change from 0 to 28720 Oct 30 23:55:15.131199 kernel: loop6: detected capacity change from 0 to 207008 Oct 30 23:55:15.153950 kernel: loop7: detected capacity change from 0 to 113512 Oct 30 23:55:15.163832 (sd-merge)[1454]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Oct 30 23:55:15.164284 (sd-merge)[1454]: Merged extensions into '/usr'. Oct 30 23:55:15.168685 systemd[1]: Reload requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 23:55:15.168834 systemd[1]: Reloading... Oct 30 23:55:15.171208 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 30 23:55:15.238920 zram_generator::config[1482]: No configuration found. Oct 30 23:55:15.375684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:55:15.478100 systemd[1]: Reloading finished in 308 ms. Oct 30 23:55:15.498727 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 30 23:55:15.506365 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 23:55:15.519597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 23:55:15.529962 systemd[1]: Starting ensure-sysext.service... Oct 30 23:55:15.537083 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 30 23:55:15.542906 lvm[1543]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 30 23:55:15.546130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 23:55:15.559755 systemd[1]: Reload requested from client PID 1542 ('systemctl') (unit ensure-sysext.service)... Oct 30 23:55:15.559770 systemd[1]: Reloading... Oct 30 23:55:15.570913 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 30 23:55:15.571410 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 23:55:15.572153 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 23:55:15.572448 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Oct 30 23:55:15.572496 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Oct 30 23:55:15.641914 zram_generator::config[1581]: No configuration found. Oct 30 23:55:15.650888 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 23:55:15.650899 systemd-tmpfiles[1544]: Skipping /boot Oct 30 23:55:15.662167 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 23:55:15.662270 systemd-tmpfiles[1544]: Skipping /boot Oct 30 23:55:15.747136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:55:15.844856 systemd[1]: Reloading finished in 283 ms. Oct 30 23:55:15.870933 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Oct 30 23:55:15.878386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 23:55:15.898122 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 23:55:15.977111 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 23:55:15.985072 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 23:55:15.996654 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 23:55:16.004410 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 23:55:16.015064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:16.022733 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 23:55:16.032942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 23:55:16.052444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 23:55:16.058382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:16.058513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:16.061733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 23:55:16.061939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 23:55:16.070543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 23:55:16.070710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 23:55:16.081875 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 23:55:16.082061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 23:55:16.101866 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 23:55:16.112185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 23:55:16.117101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 23:55:16.124840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 23:55:16.134634 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 23:55:16.144165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 23:55:16.150387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 23:55:16.150513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 23:55:16.150652 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 23:55:16.158303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 23:55:16.158498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 23:55:16.165486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 30 23:55:16.172044 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 23:55:16.172202 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 23:55:16.179077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 23:55:16.179319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 23:55:16.186861 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 23:55:16.187219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 23:55:16.197548 systemd[1]: Finished ensure-sysext.service. Oct 30 23:55:16.206280 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 23:55:16.206297 systemd-resolved[1641]: Positive Trust Anchors: Oct 30 23:55:16.206308 systemd-resolved[1641]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 23:55:16.206338 systemd-resolved[1641]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 23:55:16.206719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 23:55:16.255586 systemd-resolved[1641]: Using system hostname 'ci-4230.2.4-n-0164ad71e3'. Oct 30 23:55:16.259343 augenrules[1680]: No rules Oct 30 23:55:16.260711 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 23:55:16.262000 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 23:55:16.273747 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 23:55:16.280055 systemd[1]: Reached target network.target - Network. Oct 30 23:55:16.284918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 23:55:16.383786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 30 23:55:16.509016 systemd-networkd[1341]: eth0: Gained IPv6LL Oct 30 23:55:16.511493 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 23:55:16.519136 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 23:55:24.907791 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 30 23:55:24.915694 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 23:55:36.270935 ldconfig[1273]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 30 23:55:36.293386 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 30 23:55:36.306075 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 23:55:36.335918 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Oct 30 23:55:36.343454 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 23:55:36.350385 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 23:55:36.357472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 23:55:36.364863 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 23:55:36.370639 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 23:55:36.377450 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 23:55:36.384193 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 23:55:36.384231 systemd[1]: Reached target paths.target - Path Units. Oct 30 23:55:36.389039 systemd[1]: Reached target timers.target - Timer Units. Oct 30 23:55:36.437062 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 23:55:36.445100 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 23:55:36.452358 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 23:55:36.460039 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 23:55:36.467168 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 23:55:36.475544 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 23:55:36.482074 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 23:55:36.489833 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 23:55:36.496045 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 23:55:36.501406 systemd[1]: Reached target basic.target - Basic System. Oct 30 23:55:36.506946 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 23:55:36.506980 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 23:55:36.528984 systemd[1]: Starting chronyd.service - NTP client/server... Oct 30 23:55:36.540018 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 23:55:36.548056 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 30 23:55:36.559280 (chronyd)[1697]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Oct 30 23:55:36.559554 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 23:55:36.567838 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 23:55:36.574781 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 23:55:36.581053 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 23:55:36.581098 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Oct 30 23:55:36.583268 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Oct 30 23:55:36.589859 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Oct 30 23:55:36.590989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:55:36.599700 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 23:55:36.603520 KVP[1706]: KVP starting; pid is:1706 Oct 30 23:55:36.611276 chronyd[1710]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Oct 30 23:55:36.613621 kernel: hv_utils: KVP IC version 4.0 Oct 30 23:55:36.612863 KVP[1706]: KVP LIC Version: 3.1 Oct 30 23:55:36.614289 jq[1704]: false Oct 30 23:55:36.620801 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 23:55:36.629091 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 23:55:36.637327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 23:55:36.649038 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 23:55:36.657111 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 30 23:55:36.664858 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 30 23:55:36.665384 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 23:55:36.666566 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 23:55:36.675875 extend-filesystems[1705]: Found loop4 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found loop5 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found loop6 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found loop7 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda1 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda2 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda3 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found usr Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda4 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda6 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda7 Oct 30 23:55:36.682346 extend-filesystems[1705]: Found sda9 Oct 30 23:55:36.682346 extend-filesystems[1705]: Checking size of /dev/sda9 Oct 30 23:55:36.706650 chronyd[1710]: Timezone right/UTC failed leap second check, ignoring Oct 30 23:55:36.836615 extend-filesystems[1705]: Old size kept for /dev/sda9 Oct 30 23:55:36.836615 extend-filesystems[1705]: Found sr0 Oct 30 23:55:36.688611 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 23:55:36.710974 chronyd[1710]: Loaded seccomp filter (level 2) Oct 30 23:55:36.895755 update_engine[1721]: I20251030 23:55:36.827872 1721 main.cc:92] Flatcar Update Engine starting Oct 30 23:55:36.724130 systemd[1]: Started chronyd.service - NTP client/server. Oct 30 23:55:36.735342 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 30 23:55:36.900221 jq[1722]: true Oct 30 23:55:36.735528 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 23:55:36.740537 systemd[1]: motdgen.service: Deactivated successfully. 
Oct 30 23:55:36.900542 tar[1735]: linux-arm64/LICENSE Oct 30 23:55:36.900542 tar[1735]: linux-arm64/helm Oct 30 23:55:36.740716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 23:55:36.753774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 23:55:36.903486 jq[1736]: true Oct 30 23:55:36.754970 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 23:55:36.789438 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 30 23:55:36.789615 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 23:55:36.822084 (ntainerd)[1743]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 30 23:55:36.832848 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 23:55:36.919822 systemd-logind[1718]: New seat seat0. Oct 30 23:55:36.923625 systemd-logind[1718]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 30 23:55:36.923851 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 23:55:37.031975 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1762) Oct 30 23:55:38.313366 dbus-daemon[1700]: [system] SELinux support is enabled Oct 30 23:55:38.322667 dbus-daemon[1700]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 30 23:55:38.459483 update_engine[1721]: I20251030 23:55:38.316458 1721 update_check_scheduler.cc:74] Next update check in 4m43s Oct 30 23:55:38.313694 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 23:55:38.322057 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 23:55:38.322080 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 23:55:38.329520 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 23:55:38.329536 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 23:55:38.337018 systemd[1]: Started update-engine.service - Update Engine. Oct 30 23:55:38.349215 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 30 23:55:38.550161 bash[1786]: Updated "/home/core/.ssh/authorized_keys" Oct 30 23:55:38.552394 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 23:55:38.562840 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 30 23:55:38.572572 coreos-metadata[1699]: Oct 30 23:55:38.572 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 30 23:55:38.577766 coreos-metadata[1699]: Oct 30 23:55:38.576 INFO Fetch successful Oct 30 23:55:38.577766 coreos-metadata[1699]: Oct 30 23:55:38.576 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 30 23:55:38.582915 coreos-metadata[1699]: Oct 30 23:55:38.582 INFO Fetch successful Oct 30 23:55:38.582915 coreos-metadata[1699]: Oct 30 23:55:38.582 INFO Fetching http://168.63.129.16/machine/5c34d4a4-eb9d-40cd-9d09-9cff39e29d15/f35b90b3%2D4a84%2D4040%2Db7d4%2D4ea6493b4c22.%5Fci%2D4230.2.4%2Dn%2D0164ad71e3?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 30 23:55:38.585457 coreos-metadata[1699]: Oct 30 23:55:38.585 INFO Fetch successful Oct 30 23:55:38.585457 coreos-metadata[1699]: Oct 30 23:55:38.585 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 30 23:55:38.597848 coreos-metadata[1699]: Oct 30 23:55:38.597 INFO Fetch successful Oct 30 23:55:38.629393 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 30 23:55:38.638536 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 23:55:38.655791 tar[1735]: linux-arm64/README.md Oct 30 23:55:38.666029 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 30 23:55:38.851377 sshd_keygen[1732]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 23:55:38.869556 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 23:55:38.882169 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 23:55:38.889140 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Oct 30 23:55:38.896873 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 23:55:38.898921 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 23:55:38.915057 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 23:55:38.922998 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Oct 30 23:55:39.030855 locksmithd[1848]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 23:55:39.335889 containerd[1743]: time="2025-10-30T23:55:39.334786820Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Oct 30 23:55:39.359642 containerd[1743]: time="2025-10-30T23:55:39.359607220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363308500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363343860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363362260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363525580Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363541860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363600740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363614460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363802460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363815700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363828860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:39.364918 containerd[1743]: time="2025-10-30T23:55:39.363837380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:39.365185 containerd[1743]: time="2025-10-30T23:55:39.363934100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:39.365185 containerd[1743]: time="2025-10-30T23:55:39.364165460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 30 23:55:39.365185 containerd[1743]: time="2025-10-30T23:55:39.364287900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 30 23:55:39.365185 containerd[1743]: time="2025-10-30T23:55:39.364300980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 30 23:55:39.365185 containerd[1743]: time="2025-10-30T23:55:39.364369020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 30 23:55:39.365185 containerd[1743]: time="2025-10-30T23:55:39.364407740Z" level=info msg="metadata content store policy set" policy=shared Oct 30 23:55:39.373116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:55:39.380314 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:55:39.381704 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 23:55:39.395178 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 23:55:39.403232 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 30 23:55:39.413496 systemd[1]: Reached target getty.target - Login Prompts. 
Oct 30 23:55:39.858759 kubelet[1892]: E1030 23:55:39.806292 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:55:39.808510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:55:39.808645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:55:39.808989 systemd[1]: kubelet.service: Consumed 717ms CPU time, 258.5M memory peak. Oct 30 23:55:39.907799 containerd[1743]: time="2025-10-30T23:55:39.907741860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 30 23:55:39.907933 containerd[1743]: time="2025-10-30T23:55:39.907828700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 30 23:55:39.907933 containerd[1743]: time="2025-10-30T23:55:39.907846500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 30 23:55:39.907933 containerd[1743]: time="2025-10-30T23:55:39.907862940Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 30 23:55:39.907933 containerd[1743]: time="2025-10-30T23:55:39.907877420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 30 23:55:39.908090 containerd[1743]: time="2025-10-30T23:55:39.908064780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 30 23:55:39.908369 containerd[1743]: time="2025-10-30T23:55:39.908323980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 30 23:55:39.908481 containerd[1743]: time="2025-10-30T23:55:39.908457340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 30 23:55:39.908518 containerd[1743]: time="2025-10-30T23:55:39.908483020Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 30 23:55:39.908518 containerd[1743]: time="2025-10-30T23:55:39.908501020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 30 23:55:39.908518 containerd[1743]: time="2025-10-30T23:55:39.908514980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 30 23:55:39.908571 containerd[1743]: time="2025-10-30T23:55:39.908527900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 30 23:55:39.908571 containerd[1743]: time="2025-10-30T23:55:39.908540140Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 30 23:55:39.908571 containerd[1743]: time="2025-10-30T23:55:39.908553740Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 30 23:55:39.908571 containerd[1743]: time="2025-10-30T23:55:39.908568020Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Oct 30 23:55:39.908639 containerd[1743]: time="2025-10-30T23:55:39.908581020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 30 23:55:39.908639 containerd[1743]: time="2025-10-30T23:55:39.908594940Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 30 23:55:39.908639 containerd[1743]: time="2025-10-30T23:55:39.908606980Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 30 23:55:39.908639 containerd[1743]: time="2025-10-30T23:55:39.908626780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908696 containerd[1743]: time="2025-10-30T23:55:39.908640060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908696 containerd[1743]: time="2025-10-30T23:55:39.908651620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908696 containerd[1743]: time="2025-10-30T23:55:39.908664220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908696 containerd[1743]: time="2025-10-30T23:55:39.908681860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908771 containerd[1743]: time="2025-10-30T23:55:39.908694500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908771 containerd[1743]: time="2025-10-30T23:55:39.908706140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908771 containerd[1743]: time="2025-10-30T23:55:39.908718780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908771 containerd[1743]: time="2025-10-30T23:55:39.908731060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908771 containerd[1743]: time="2025-10-30T23:55:39.908748540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908771 containerd[1743]: time="2025-10-30T23:55:39.908760340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908862 containerd[1743]: time="2025-10-30T23:55:39.908771660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908862 containerd[1743]: time="2025-10-30T23:55:39.908783780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908862 containerd[1743]: time="2025-10-30T23:55:39.908798460Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 30 23:55:39.908862 containerd[1743]: time="2025-10-30T23:55:39.908818260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.908862 containerd[1743]: time="2025-10-30T23:55:39.908830700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Oct 30 23:55:39.908862 containerd[1743]: time="2025-10-30T23:55:39.908841540Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908911500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908932420Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908943100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908955060Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908964300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908976020Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908985500Z" level=info msg="NRI interface is disabled by configuration." Oct 30 23:55:39.909071 containerd[1743]: time="2025-10-30T23:55:39.908994700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 30 23:55:39.909310 containerd[1743]: time="2025-10-30T23:55:39.909260500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 30 23:55:39.909427 containerd[1743]: time="2025-10-30T23:55:39.909314820Z" level=info msg="Connect containerd service" Oct 30 23:55:39.909427 containerd[1743]: time="2025-10-30T23:55:39.909344140Z" level=info msg="using legacy CRI server" Oct 30 23:55:39.909427 containerd[1743]: time="2025-10-30T23:55:39.909350540Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 23:55:39.909480 containerd[1743]: time="2025-10-30T23:55:39.909462900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 30 23:55:39.911107 containerd[1743]: time="2025-10-30T23:55:39.910489700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 23:55:39.911107 containerd[1743]: time="2025-10-30T23:55:39.911008900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 23:55:39.911107 containerd[1743]: time="2025-10-30T23:55:39.911056140Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 23:55:39.911419 containerd[1743]: time="2025-10-30T23:55:39.911238980Z" level=info msg="Start subscribing containerd event" Oct 30 23:55:39.911456 containerd[1743]: time="2025-10-30T23:55:39.911439700Z" level=info msg="Start recovering state" Oct 30 23:55:39.911525 containerd[1743]: time="2025-10-30T23:55:39.911508060Z" level=info msg="Start event monitor" Oct 30 23:55:39.911525 containerd[1743]: time="2025-10-30T23:55:39.911523220Z" level=info msg="Start snapshots syncer" Oct 30 23:55:39.911587 containerd[1743]: time="2025-10-30T23:55:39.911532660Z" level=info msg="Start cni network conf syncer for default" Oct 30 23:55:39.911587 containerd[1743]: time="2025-10-30T23:55:39.911539900Z" level=info msg="Start streaming server" Oct 30 23:55:39.911624 containerd[1743]: time="2025-10-30T23:55:39.911606380Z" level=info msg="containerd successfully booted in 0.578366s" Oct 30 23:55:39.912019 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 23:55:39.920348 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 23:55:39.928968 systemd[1]: Startup finished in 666ms (kernel) + 14.942s (initrd) + 1min 1.729s (userspace) = 1min 17.339s. Oct 30 23:55:43.509684 login[1894]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Oct 30 23:55:43.511121 login[1895]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:55:43.523303 systemd-logind[1718]: New session 1 of user core. 
Oct 30 23:55:43.524773 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 23:55:43.535118 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 23:55:44.087996 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 23:55:44.095223 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 23:55:44.169620 (systemd)[1908]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 23:55:44.172414 systemd-logind[1718]: New session c1 of user core. Oct 30 23:55:44.510141 login[1894]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:55:44.515085 systemd-logind[1718]: New session 2 of user core. Oct 30 23:55:45.734846 systemd[1908]: Queued start job for default target default.target. Oct 30 23:55:45.745749 systemd[1908]: Created slice app.slice - User Application Slice. Oct 30 23:55:45.745939 systemd[1908]: Reached target paths.target - Paths. Oct 30 23:55:45.746071 systemd[1908]: Reached target timers.target - Timers. Oct 30 23:55:45.747351 systemd[1908]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 23:55:45.756715 systemd[1908]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 23:55:45.756773 systemd[1908]: Reached target sockets.target - Sockets. Oct 30 23:55:45.756813 systemd[1908]: Reached target basic.target - Basic System. Oct 30 23:55:45.756853 systemd[1908]: Reached target default.target - Main User Target. Oct 30 23:55:45.756904 systemd[1908]: Startup finished in 1.578s. Oct 30 23:55:45.757070 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 23:55:45.762009 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 30 23:55:45.763420 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 30 23:55:47.497902 waagent[1877]: 2025-10-30T23:55:47.495747Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Oct 30 23:55:47.501915 waagent[1877]: 2025-10-30T23:55:47.501841Z INFO Daemon Daemon OS: flatcar 4230.2.4 Oct 30 23:55:47.506375 waagent[1877]: 2025-10-30T23:55:47.506323Z INFO Daemon Daemon Python: 3.11.11 Oct 30 23:55:47.511061 waagent[1877]: 2025-10-30T23:55:47.510898Z INFO Daemon Daemon Run daemon Oct 30 23:55:47.515257 waagent[1877]: 2025-10-30T23:55:47.515211Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.4' Oct 30 23:55:47.523971 waagent[1877]: 2025-10-30T23:55:47.523917Z INFO Daemon Daemon Using waagent for provisioning Oct 30 23:55:47.529102 waagent[1877]: 2025-10-30T23:55:47.529060Z INFO Daemon Daemon Activate resource disk Oct 30 23:55:47.533926 waagent[1877]: 2025-10-30T23:55:47.533865Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 30 23:55:47.546113 waagent[1877]: 2025-10-30T23:55:47.546055Z INFO Daemon Daemon Found device: None Oct 30 23:55:47.551101 waagent[1877]: 2025-10-30T23:55:47.551045Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 30 23:55:47.559262 waagent[1877]: 2025-10-30T23:55:47.559214Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 30 23:55:47.570466 waagent[1877]: 2025-10-30T23:55:47.570407Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 30 23:55:47.575977 waagent[1877]: 2025-10-30T23:55:47.575930Z INFO Daemon Daemon Running default provisioning handler Oct 30 23:55:47.587631 waagent[1877]: 2025-10-30T23:55:47.587565Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Oct 30 23:55:47.601352 waagent[1877]: 2025-10-30T23:55:47.601287Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 30 23:55:47.610857 waagent[1877]: 2025-10-30T23:55:47.610804Z INFO Daemon Daemon cloud-init is enabled: False Oct 30 23:55:47.616211 waagent[1877]: 2025-10-30T23:55:47.616163Z INFO Daemon Daemon Copying ovf-env.xml Oct 30 23:55:47.759056 waagent[1877]: 2025-10-30T23:55:47.758854Z INFO Daemon Daemon Successfully mounted dvd Oct 30 23:55:47.790421 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 30 23:55:47.792102 waagent[1877]: 2025-10-30T23:55:47.791713Z INFO Daemon Daemon Detect protocol endpoint Oct 30 23:55:47.796972 waagent[1877]: 2025-10-30T23:55:47.796911Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 30 23:55:47.802685 waagent[1877]: 2025-10-30T23:55:47.802633Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Oct 30 23:55:47.809393 waagent[1877]: 2025-10-30T23:55:47.809343Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 30 23:55:47.815106 waagent[1877]: 2025-10-30T23:55:47.815056Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 30 23:55:47.820166 waagent[1877]: 2025-10-30T23:55:47.820119Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 30 23:55:47.878399 waagent[1877]: 2025-10-30T23:55:47.878356Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 30 23:55:47.885199 waagent[1877]: 2025-10-30T23:55:47.885170Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 30 23:55:47.890728 waagent[1877]: 2025-10-30T23:55:47.890677Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 30 23:55:48.195924 waagent[1877]: 2025-10-30T23:55:48.195121Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 30 23:55:48.201758 waagent[1877]: 2025-10-30T23:55:48.201692Z INFO Daemon Daemon Forcing an update of the goal state. Oct 30 23:55:48.212769 waagent[1877]: 2025-10-30T23:55:48.212719Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 30 23:55:49.230876 waagent[1877]: 2025-10-30T23:55:49.230819Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Oct 30 23:55:49.238726 waagent[1877]: 2025-10-30T23:55:49.238675Z INFO Daemon Oct 30 23:55:49.242023 waagent[1877]: 2025-10-30T23:55:49.241976Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bc55be92-38fb-4849-890c-1932e0e434da eTag: 8229941543724947396 source: Fabric] Oct 30 23:55:49.255058 waagent[1877]: 2025-10-30T23:55:49.255009Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Oct 30 23:55:49.262567 waagent[1877]: 2025-10-30T23:55:49.262511Z INFO Daemon Oct 30 23:55:49.265801 waagent[1877]: 2025-10-30T23:55:49.265757Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Oct 30 23:55:49.277448 waagent[1877]: 2025-10-30T23:55:49.277411Z INFO Daemon Daemon Downloading artifacts profile blob Oct 30 23:55:49.355220 waagent[1877]: 2025-10-30T23:55:49.355128Z INFO Daemon Downloaded certificate {'thumbprint': '884774EF103CB8619005FDF7B5B085A7271CAE2A', 'hasPrivateKey': True} Oct 30 23:55:49.366665 waagent[1877]: 2025-10-30T23:55:49.366610Z INFO Daemon Fetch goal state completed Oct 30 23:55:49.379011 waagent[1877]: 2025-10-30T23:55:49.378957Z INFO Daemon Daemon Starting provisioning Oct 30 23:55:49.384711 waagent[1877]: 2025-10-30T23:55:49.384643Z INFO Daemon Daemon Handle ovf-env.xml. Oct 30 23:55:49.389561 waagent[1877]: 2025-10-30T23:55:49.389508Z INFO Daemon Daemon Set hostname [ci-4230.2.4-n-0164ad71e3] Oct 30 23:55:49.887212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 23:55:49.895063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:55:51.917089 waagent[1877]: 2025-10-30T23:55:51.917019Z INFO Daemon Daemon Publish hostname [ci-4230.2.4-n-0164ad71e3] Oct 30 23:55:51.928902 waagent[1877]: 2025-10-30T23:55:51.923992Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 30 23:55:51.930430 waagent[1877]: 2025-10-30T23:55:51.930373Z INFO Daemon Daemon Primary interface is [eth0] Oct 30 23:55:52.016729 systemd-networkd[1341]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 23:55:52.016742 systemd-networkd[1341]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 30 23:55:52.016769 systemd-networkd[1341]: eth0: DHCP lease lost Oct 30 23:55:52.017747 waagent[1877]: 2025-10-30T23:55:52.017662Z INFO Daemon Daemon Create user account if not exists Oct 30 23:55:52.023304 waagent[1877]: 2025-10-30T23:55:52.023246Z INFO Daemon Daemon User core already exists, skip useradd Oct 30 23:55:52.029245 waagent[1877]: 2025-10-30T23:55:52.029128Z INFO Daemon Daemon Configure sudoer Oct 30 23:55:52.034296 waagent[1877]: 2025-10-30T23:55:52.034232Z INFO Daemon Daemon Configure sshd Oct 30 23:55:52.038778 waagent[1877]: 2025-10-30T23:55:52.038726Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Oct 30 23:55:52.052699 waagent[1877]: 2025-10-30T23:55:52.052609Z INFO Daemon Daemon Deploy ssh public key. Oct 30 23:55:52.061013 systemd-networkd[1341]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 30 23:55:52.767906 waagent[1877]: 2025-10-30T23:55:52.766069Z INFO Daemon Daemon Provisioning complete Oct 30 23:55:52.784751 waagent[1877]: 2025-10-30T23:55:52.784701Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 30 23:55:52.791313 waagent[1877]: 2025-10-30T23:55:52.791252Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Oct 30 23:55:52.802104 waagent[1877]: 2025-10-30T23:55:52.802045Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Oct 30 23:55:52.935744 waagent[1963]: 2025-10-30T23:55:52.935229Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Oct 30 23:55:52.935744 waagent[1963]: 2025-10-30T23:55:52.935377Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.4 Oct 30 23:55:52.935744 waagent[1963]: 2025-10-30T23:55:52.935432Z INFO ExtHandler ExtHandler Python: 3.11.11 Oct 30 23:55:53.037277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:55:53.040758 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:55:53.082418 kubelet[1972]: E1030 23:55:53.082343 1972 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:55:53.085420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:55:53.085563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:55:53.086090 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.6M memory peak. 
Oct 30 23:55:53.148112 waagent[1963]: 2025-10-30T23:55:53.148014Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 30 23:55:53.148317 waagent[1963]: 2025-10-30T23:55:53.148273Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 30 23:55:53.148388 waagent[1963]: 2025-10-30T23:55:53.148355Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 30 23:55:53.156717 waagent[1963]: 2025-10-30T23:55:53.156654Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 30 23:55:53.162733 waagent[1963]: 2025-10-30T23:55:53.162689Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Oct 30 23:55:53.163272 waagent[1963]: 2025-10-30T23:55:53.163228Z INFO ExtHandler Oct 30 23:55:53.163348 waagent[1963]: 2025-10-30T23:55:53.163317Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 89bb721b-25f1-469d-b638-58e3d87dec8d eTag: 8229941543724947396 source: Fabric] Oct 30 23:55:53.163633 waagent[1963]: 2025-10-30T23:55:53.163593Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Oct 30 23:55:53.164241 waagent[1963]: 2025-10-30T23:55:53.164193Z INFO ExtHandler Oct 30 23:55:53.164309 waagent[1963]: 2025-10-30T23:55:53.164278Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 30 23:55:53.168412 waagent[1963]: 2025-10-30T23:55:53.168378Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 30 23:55:53.238938 waagent[1963]: 2025-10-30T23:55:53.238813Z INFO ExtHandler Downloaded certificate {'thumbprint': '884774EF103CB8619005FDF7B5B085A7271CAE2A', 'hasPrivateKey': True} Oct 30 23:55:53.239458 waagent[1963]: 2025-10-30T23:55:53.239408Z INFO ExtHandler Fetch goal state completed Oct 30 23:55:53.257502 waagent[1963]: 2025-10-30T23:55:53.257446Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1963 Oct 30 23:55:53.257664 waagent[1963]: 2025-10-30T23:55:53.257627Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Oct 30 23:55:53.259289 waagent[1963]: 2025-10-30T23:55:53.259243Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.4', '', 'Flatcar Container Linux by Kinvolk'] Oct 30 23:55:53.259650 waagent[1963]: 2025-10-30T23:55:53.259611Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 30 23:55:53.311386 waagent[1963]: 2025-10-30T23:55:53.311283Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 30 23:55:53.311526 waagent[1963]: 2025-10-30T23:55:53.311485Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 30 23:55:53.317518 waagent[1963]: 2025-10-30T23:55:53.317059Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 30 23:55:53.325497 systemd[1]: Reload requested from client PID 1988 ('systemctl') (unit waagent.service)... Oct 30 23:55:53.325624 systemd[1]: Reloading... Oct 30 23:55:53.419979 zram_generator::config[2028]: No configuration found. Oct 30 23:55:53.527727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 30 23:55:53.629163 systemd[1]: Reloading finished in 303 ms. Oct 30 23:55:53.647747 waagent[1963]: 2025-10-30T23:55:53.646056Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Oct 30 23:55:53.652094 systemd[1]: Reload requested from client PID 2084 ('systemctl') (unit waagent.service)... Oct 30 23:55:53.652223 systemd[1]: Reloading... Oct 30 23:55:53.739001 zram_generator::config[2123]: No configuration found. Oct 30 23:55:53.845962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:55:53.944532 systemd[1]: Reloading finished in 291 ms. Oct 30 23:55:53.961129 waagent[1963]: 2025-10-30T23:55:53.960184Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Oct 30 23:55:53.961129 waagent[1963]: 2025-10-30T23:55:53.960339Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Oct 30 23:55:54.419666 waagent[1963]: 2025-10-30T23:55:54.418467Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Oct 30 23:55:54.419666 waagent[1963]: 2025-10-30T23:55:54.419087Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Oct 30 23:55:54.419903 waagent[1963]: 2025-10-30T23:55:54.419837Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 30 23:55:54.420004 waagent[1963]: 2025-10-30T23:55:54.419956Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 30 23:55:54.420094 waagent[1963]: 2025-10-30T23:55:54.420061Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 30 23:55:54.420315 waagent[1963]: 2025-10-30T23:55:54.420272Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 30 23:55:54.420830 waagent[1963]: 2025-10-30T23:55:54.420772Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Oct 30 23:55:54.421012 waagent[1963]: 2025-10-30T23:55:54.420925Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 30 23:55:54.421012 waagent[1963]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 30 23:55:54.421012 waagent[1963]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Oct 30 23:55:54.421012 waagent[1963]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 30 23:55:54.421012 waagent[1963]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 30 23:55:54.421012 waagent[1963]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 30 23:55:54.421012 waagent[1963]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 30 23:55:54.421161 waagent[1963]: 2025-10-30T23:55:54.421062Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 30 23:55:54.421161 waagent[1963]: 2025-10-30T23:55:54.421140Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 30 23:55:54.421304 waagent[1963]: 2025-10-30T23:55:54.421263Z INFO EnvHandler ExtHandler Configure routes Oct 30 23:55:54.421366 waagent[1963]: 2025-10-30T23:55:54.421338Z INFO EnvHandler ExtHandler Gateway:None Oct 30 23:55:54.421416 waagent[1963]: 2025-10-30T23:55:54.421391Z INFO EnvHandler ExtHandler Routes:None Oct 30 23:55:54.421847 waagent[1963]: 2025-10-30T23:55:54.421791Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 30 23:55:54.422647 waagent[1963]: 2025-10-30T23:55:54.422430Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 30 23:55:54.422851 waagent[1963]: 2025-10-30T23:55:54.422801Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 30 23:55:54.422967 waagent[1963]: 2025-10-30T23:55:54.422930Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Oct 30 23:55:54.423560 waagent[1963]: 2025-10-30T23:55:54.423518Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 30 23:55:54.509329 waagent[1963]: 2025-10-30T23:55:54.509245Z INFO MonitorHandler ExtHandler Network interfaces: Oct 30 23:55:54.509329 waagent[1963]: Executing ['ip', '-a', '-o', 'link']: Oct 30 23:55:54.509329 waagent[1963]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 30 23:55:54.509329 waagent[1963]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:8c:80 brd ff:ff:ff:ff:ff:ff Oct 30 23:55:54.509329 waagent[1963]: 3: enP4013s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:8c:80 brd ff:ff:ff:ff:ff:ff\ altname enP4013p0s2 Oct 30 23:55:54.509329 waagent[1963]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 30 23:55:54.509329 waagent[1963]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 30 23:55:54.509329 waagent[1963]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 30 23:55:54.509329 waagent[1963]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 30 23:55:54.509329 waagent[1963]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Oct 30 23:55:54.509329 waagent[1963]: 2: eth0 inet6 fe80::222:48ff:fec1:8c80/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 30 23:55:54.550564 waagent[1963]: 2025-10-30T23:55:54.550503Z INFO ExtHandler ExtHandler Oct 30 23:55:54.550661 waagent[1963]: 2025-10-30T23:55:54.550633Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 50629108-8d93-4f15-a715-28eba0870824 correlation e1a7dacc-eaff-4f0f-ada7-a0c72787bbec created: 2025-10-30T23:53:34.062065Z] Oct 30 23:55:54.551068 waagent[1963]: 2025-10-30T23:55:54.551019Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Oct 30 23:55:54.551692 waagent[1963]: 2025-10-30T23:55:54.551650Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Oct 30 23:55:54.567919 waagent[1963]: 2025-10-30T23:55:54.567168Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Oct 30 23:55:54.567919 waagent[1963]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 23:55:54.567919 waagent[1963]: pkts bytes target prot opt in out source destination Oct 30 23:55:54.567919 waagent[1963]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 30 23:55:54.567919 waagent[1963]: pkts bytes target prot opt in out source destination Oct 30 23:55:54.567919 waagent[1963]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 23:55:54.567919 waagent[1963]: pkts bytes target prot opt in out source destination Oct 30 23:55:54.567919 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 30 23:55:54.567919 waagent[1963]: 6 2706 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 30 23:55:54.567919 waagent[1963]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 30 23:55:54.570326 waagent[1963]: 2025-10-30T23:55:54.570273Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 30 23:55:54.570326 waagent[1963]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 23:55:54.570326 waagent[1963]: pkts bytes target prot opt in out source destination Oct 30 23:55:54.570326 waagent[1963]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 30 23:55:54.570326 waagent[1963]: pkts bytes target prot opt in out source destination Oct 30 23:55:54.570326 waagent[1963]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 30 23:55:54.570326 waagent[1963]: pkts bytes target prot opt in out source destination Oct 30 23:55:54.570326 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 30 23:55:54.570326 waagent[1963]: 10 3292 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 30 23:55:54.570326 waagent[1963]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 30 23:55:54.570564 waagent[1963]: 2025-10-30T23:55:54.570525Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Oct 30 23:55:54.588821 waagent[1963]: 2025-10-30T23:55:54.588755Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2384AF41-0D3E-4B66-85AD-6E27B56F5033;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Oct 30 23:56:00.495114 chronyd[1710]: Selected source PHC0 Oct 30 23:56:02.005815 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Oct 30 23:56:03.137332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 23:56:03.145139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:04.077025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:04.081122 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:04.121373 kubelet[2216]: E1030 23:56:04.121294 2216 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:04.123228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:04.123354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:04.124031 systemd[1]: kubelet.service: Consumed 128ms CPU time, 107.5M memory peak. 
Oct 30 23:56:14.137383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 23:56:14.152172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:14.462747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:14.466403 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:14.503920 kubelet[2230]: E1030 23:56:14.503443 2230 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:14.505704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:14.505850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:14.506351 systemd[1]: kubelet.service: Consumed 128ms CPU time, 106.8M memory peak. Oct 30 23:56:20.750765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 23:56:20.759116 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.16.10:39378.service - OpenSSH per-connection server daemon (10.200.16.10:39378). Oct 30 23:56:21.403262 sshd[2239]: Accepted publickey for core from 10.200.16.10 port 39378 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 30 23:56:21.404553 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:21.408988 systemd-logind[1718]: New session 3 of user core. Oct 30 23:56:21.415026 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 23:56:21.828199 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.16.10:39384.service - OpenSSH per-connection server daemon (10.200.16.10:39384). Oct 30 23:56:22.283656 sshd[2244]: Accepted publickey for core from 10.200.16.10 port 39384 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 30 23:56:22.284929 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:22.290636 systemd-logind[1718]: New session 4 of user core. Oct 30 23:56:22.296124 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 23:56:22.617153 sshd[2246]: Connection closed by 10.200.16.10 port 39384 Oct 30 23:56:22.617619 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:22.620212 systemd[1]: sshd@1-10.200.20.15:22-10.200.16.10:39384.service: Deactivated successfully. Oct 30 23:56:22.622001 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 23:56:22.624309 systemd-logind[1718]: Session 4 logged out. Waiting for processes to exit. Oct 30 23:56:22.625241 systemd-logind[1718]: Removed session 4. Oct 30 23:56:22.690236 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.16.10:39396.service - OpenSSH per-connection server daemon (10.200.16.10:39396). Oct 30 23:56:23.109343 sshd[2252]: Accepted publickey for core from 10.200.16.10 port 39396 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 30 23:56:23.110559 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:23.115944 systemd-logind[1718]: New session 5 of user core. Oct 30 23:56:23.122076 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 30 23:56:23.447613 sshd[2254]: Connection closed by 10.200.16.10 port 39396 Oct 30 23:56:23.447475 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:23.451343 systemd[1]: sshd@2-10.200.20.15:22-10.200.16.10:39396.service: Deactivated successfully. Oct 30 23:56:23.452843 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 23:56:23.454298 systemd-logind[1718]: Session 5 logged out. Waiting for processes to exit. Oct 30 23:56:23.455297 systemd-logind[1718]: Removed session 5. Oct 30 23:56:23.526075 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.16.10:39410.service - OpenSSH per-connection server daemon (10.200.16.10:39410). Oct 30 23:56:23.590827 update_engine[1721]: I20251030 23:56:23.590313 1721 update_attempter.cc:509] Updating boot flags... Oct 30 23:56:23.676112 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2277) Oct 30 23:56:23.859037 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2280) Oct 30 23:56:23.964918 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 39410 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 30 23:56:23.965765 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:23.970858 systemd-logind[1718]: New session 6 of user core. Oct 30 23:56:23.976050 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 23:56:24.279613 sshd[2364]: Connection closed by 10.200.16.10 port 39410 Oct 30 23:56:24.279099 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:24.282592 systemd[1]: sshd@3-10.200.20.15:22-10.200.16.10:39410.service: Deactivated successfully. Oct 30 23:56:24.284435 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 23:56:24.285267 systemd-logind[1718]: Session 6 logged out. Waiting for processes to exit. Oct 30 23:56:24.286148 systemd-logind[1718]: Removed session 6. Oct 30 23:56:24.368208 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.16.10:39414.service - OpenSSH per-connection server daemon (10.200.16.10:39414). Oct 30 23:56:24.637218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 30 23:56:24.646119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:24.825115 sshd[2382]: Accepted publickey for core from 10.200.16.10 port 39414 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 30 23:56:24.826385 sshd-session[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:24.831869 systemd-logind[1718]: New session 7 of user core. Oct 30 23:56:24.841165 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 30 23:56:24.855594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 23:56:24.866219 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:24.904540 kubelet[2393]: E1030 23:56:24.904396 2393 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:24.906986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:24.907236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:24.907725 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.4M memory peak. Oct 30 23:56:29.266912 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 23:56:29.267197 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:29.329662 sudo[2400]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:29.400710 sshd[2389]: Connection closed by 10.200.16.10 port 39414 Oct 30 23:56:29.399980 sshd-session[2382]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:29.403548 systemd-logind[1718]: Session 7 logged out. Waiting for processes to exit. Oct 30 23:56:29.403991 systemd[1]: sshd@4-10.200.20.15:22-10.200.16.10:39414.service: Deactivated successfully. Oct 30 23:56:29.405446 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 23:56:29.407230 systemd-logind[1718]: Removed session 7. Oct 30 23:56:29.490190 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.16.10:39422.service - OpenSSH per-connection server daemon (10.200.16.10:39422). Oct 30 23:56:29.948508 sshd[2406]: Accepted publickey for core from 10.200.16.10 port 39422 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 30 23:56:29.949794 sshd-session[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:29.954929 systemd-logind[1718]: New session 8 of user core. Oct 30 23:56:29.960090 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 23:56:30.207809 sudo[2410]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 23:56:30.208619 sudo[2410]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:30.211789 sudo[2410]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:30.216439 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 23:56:30.216689 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:30.233197 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 23:56:30.255441 augenrules[2432]: No rules Oct 30 23:56:30.256819 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 23:56:30.257064 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 23:56:30.260081 sudo[2409]: pam_unix(sudo:session): session closed for user root Oct 30 23:56:30.335982 sshd[2408]: Connection closed by 10.200.16.10 port 39422 Oct 30 23:56:30.336639 sshd-session[2406]: pam_unix(sshd:session): session closed for user core Oct 30 23:56:30.340317 systemd[1]: sshd@5-10.200.20.15:22-10.200.16.10:39422.service: Deactivated successfully. 
Oct 30 23:56:30.341793 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 23:56:30.342472 systemd-logind[1718]: Session 8 logged out. Waiting for processes to exit. Oct 30 23:56:30.343455 systemd-logind[1718]: Removed session 8. Oct 30 23:56:30.412068 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.16.10:53320.service - OpenSSH per-connection server daemon (10.200.16.10:53320). Oct 30 23:56:30.831720 sshd[2441]: Accepted publickey for core from 10.200.16.10 port 53320 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 30 23:56:30.833014 sshd-session[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:56:30.838373 systemd-logind[1718]: New session 9 of user core. Oct 30 23:56:30.844133 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 23:56:31.069530 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 23:56:31.069800 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 23:56:32.699134 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 23:56:32.700727 (dockerd)[2462]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 23:56:34.094911 dockerd[2462]: time="2025-10-30T23:56:34.094307728Z" level=info msg="Starting up" Oct 30 23:56:35.137282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 30 23:56:35.145060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:36.414312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:36.418050 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:36.455007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:40.008587 kubelet[2490]: E1030 23:56:36.453376 2490 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:36.455130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:36.455398 systemd[1]: kubelet.service: Consumed 124ms CPU time, 109M memory peak. Oct 30 23:56:40.162698 dockerd[2462]: time="2025-10-30T23:56:40.162653884Z" level=info msg="Loading containers: start." Oct 30 23:56:40.476915 kernel: Initializing XFRM netlink socket Oct 30 23:56:40.680277 systemd-networkd[1341]: docker0: Link UP Oct 30 23:56:40.738167 dockerd[2462]: time="2025-10-30T23:56:40.738059088Z" level=info msg="Loading containers: done." 
Oct 30 23:56:41.444139 dockerd[2462]: time="2025-10-30T23:56:41.444061874Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 23:56:41.444500 dockerd[2462]: time="2025-10-30T23:56:41.444207114Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Oct 30 23:56:41.444500 dockerd[2462]: time="2025-10-30T23:56:41.444335834Z" level=info msg="Daemon has completed initialization" Oct 30 23:56:41.627488 dockerd[2462]: time="2025-10-30T23:56:41.627027325Z" level=info msg="API listen on /run/docker.sock" Oct 30 23:56:41.627505 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 23:56:42.623877 containerd[1743]: time="2025-10-30T23:56:42.623609443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 30 23:56:44.580504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990574527.mount: Deactivated successfully. Oct 30 23:56:46.637216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 30 23:56:46.647110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:46.764691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:46.775222 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:46.810143 kubelet[2684]: E1030 23:56:46.810080 2684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:46.812583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:46.812718 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:46.814958 systemd[1]: kubelet.service: Consumed 132ms CPU time, 105.5M memory peak. 
Oct 30 23:56:54.396921 containerd[1743]: time="2025-10-30T23:56:54.396011893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:54.399697 containerd[1743]: time="2025-10-30T23:56:54.399465463Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Oct 30 23:56:54.403451 containerd[1743]: time="2025-10-30T23:56:54.403399914Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:54.409472 containerd[1743]: time="2025-10-30T23:56:54.409415490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:54.410653 containerd[1743]: time="2025-10-30T23:56:54.410479173Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 11.78682561s" Oct 30 23:56:54.410653 containerd[1743]: time="2025-10-30T23:56:54.410518773Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 30 23:56:54.411429 containerd[1743]: time="2025-10-30T23:56:54.411177535Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 30 23:56:56.235933 containerd[1743]: time="2025-10-30T23:56:56.234926498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:56.239042 containerd[1743]: time="2025-10-30T23:56:56.238997669Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Oct 30 23:56:56.242804 containerd[1743]: time="2025-10-30T23:56:56.242759960Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:56.248444 containerd[1743]: time="2025-10-30T23:56:56.248395375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:56:56.249443 containerd[1743]: time="2025-10-30T23:56:56.249407138Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.838196403s" Oct 30 23:56:56.249443 containerd[1743]: time="2025-10-30T23:56:56.249440658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 30 23:56:56.250505 
containerd[1743]: time="2025-10-30T23:56:56.250461901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 30 23:56:56.887452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 30 23:56:56.895064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:56:57.000111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:56:57.012209 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:56:57.116235 kubelet[2746]: E1030 23:56:57.116176 2746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:56:57.118616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:56:57.118874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:56:57.119260 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.1M memory peak. Oct 30 23:57:07.137434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 30 23:57:07.151113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:57:08.233528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:57:08.238004 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:57:08.277546 kubelet[2765]: E1030 23:57:08.277490 2765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:57:08.280024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:57:08.280291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:57:08.280787 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107M memory peak. 
Oct 30 23:57:13.326400 containerd[1743]: time="2025-10-30T23:57:13.326337395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:13.330958 containerd[1743]: time="2025-10-30T23:57:13.330899647Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Oct 30 23:57:13.341400 containerd[1743]: time="2025-10-30T23:57:13.341306993Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:13.348679 containerd[1743]: time="2025-10-30T23:57:13.348301531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:13.349445 containerd[1743]: time="2025-10-30T23:57:13.349410014Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 17.098907233s" Oct 30 23:57:13.349445 containerd[1743]: time="2025-10-30T23:57:13.349444054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 30 23:57:13.350676 containerd[1743]: time="2025-10-30T23:57:13.350654577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 30 23:57:14.773141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795769526.mount: Deactivated successfully. 
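containerd reports every completed pull with the image's unpacked size and the wall-clock time it took (for the kube-scheduler image just above: 19053117 bytes in roughly 17.1 s). Under the same saved-journal assumption as before, a short parser can turn those entries into a per-image throughput summary:

```python
#!/usr/bin/env python3
# Sketch: summarise containerd 'Pulled image ... in <seconds>s' entries from a saved journal.
# The regex allows for the escaped quotes (\") that appear in the captured log text.
import re
import sys

PULLED = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?size \\?"(?P<size>\d+)\\?" in (?P<secs>[\d.]+)s'
)

text = open(sys.argv[1] if len(sys.argv) > 1 else "boot.log", encoding="utf-8", errors="replace").read()
for m in PULLED.finditer(text):
    size, secs = int(m.group("size")), float(m.group("secs"))
    print(f'{m.group("image")}: {size / 1e6:.1f} MB in {secs:.1f}s (~{size / secs / 1e6:.2f} MB/s)')
```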
Oct 30 23:57:15.117803 containerd[1743]: time="2025-10-30T23:57:15.117674730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:15.124511 containerd[1743]: time="2025-10-30T23:57:15.124291026Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Oct 30 23:57:15.128644 containerd[1743]: time="2025-10-30T23:57:15.128592917Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:15.133808 containerd[1743]: time="2025-10-30T23:57:15.133637570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:15.134509 containerd[1743]: time="2025-10-30T23:57:15.134264611Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.783401034s" Oct 30 23:57:15.134509 containerd[1743]: time="2025-10-30T23:57:15.134298172Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 30 23:57:15.135100 containerd[1743]: time="2025-10-30T23:57:15.134745653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 30 23:57:15.911211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113819036.mount: Deactivated successfully. 
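The `var-lib-containerd-tmpmounts-containerd\x2dmount….mount` units that keep being deactivated are simply systemd's escaped names for temporary containerd mount points under /var/lib/containerd/tmpmounts/. The decoder below is a sketch of that mapping, not a replacement for systemd-escape(1):

```python
#!/usr/bin/env python3
# Sketch: decode systemd mount-unit names such as the containerd tmpmount units above.
# systemd turns "/" into "-" and escapes other characters as \xNN; undo it in that order.
import re

def systemd_unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    name = name.replace("-", "/")                      # path separators first
    name = re.sub(r"\\x([0-9a-fA-F]{2})",              # then \xNN escapes, e.g. \x2d -> "-"
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

print(systemd_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3113819036.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount3113819036
```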
Oct 30 23:57:17.454417 containerd[1743]: time="2025-10-30T23:57:17.454359924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:17.513915 containerd[1743]: time="2025-10-30T23:57:17.513840755Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Oct 30 23:57:17.553056 containerd[1743]: time="2025-10-30T23:57:17.552984174Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:17.561713 containerd[1743]: time="2025-10-30T23:57:17.561654436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:17.562821 containerd[1743]: time="2025-10-30T23:57:17.562480038Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.427707225s" Oct 30 23:57:17.562821 containerd[1743]: time="2025-10-30T23:57:17.562518558Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 30 23:57:17.563171 containerd[1743]: time="2025-10-30T23:57:17.563098839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 23:57:18.387260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Oct 30 23:57:18.392069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:57:22.707634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:57:22.711226 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:57:22.742983 kubelet[2839]: E1030 23:57:22.742915 2839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:57:22.745391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:57:22.745538 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:57:22.746175 systemd[1]: kubelet.service: Consumed 123ms CPU time, 106.8M memory peak. Oct 30 23:57:24.564862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725848026.mount: Deactivated successfully. 
Oct 30 23:57:24.759457 containerd[1743]: time="2025-10-30T23:57:24.759411943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:24.804493 containerd[1743]: time="2025-10-30T23:57:24.804432913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Oct 30 23:57:24.808616 containerd[1743]: time="2025-10-30T23:57:24.808570441Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:24.858465 containerd[1743]: time="2025-10-30T23:57:24.858320500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:24.859236 containerd[1743]: time="2025-10-30T23:57:24.859113422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 7.295983222s" Oct 30 23:57:24.859236 containerd[1743]: time="2025-10-30T23:57:24.859145222Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 30 23:57:24.859924 containerd[1743]: time="2025-10-30T23:57:24.859676863Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 30 23:57:26.363209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093719796.mount: Deactivated successfully. Oct 30 23:57:32.887344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 30 23:57:32.896070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:57:34.736247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:57:34.746326 (kubelet)[2878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 23:57:34.785502 kubelet[2878]: E1030 23:57:34.785445 2878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 23:57:34.788509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 23:57:34.788637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 23:57:34.790966 systemd[1]: kubelet.service: Consumed 133ms CPU time, 107M memory peak. 
Oct 30 23:57:36.491954 containerd[1743]: time="2025-10-30T23:57:36.491895934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:36.495347 containerd[1743]: time="2025-10-30T23:57:36.495006062Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Oct 30 23:57:36.499473 containerd[1743]: time="2025-10-30T23:57:36.499420313Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:36.505551 containerd[1743]: time="2025-10-30T23:57:36.505474128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:57:36.506906 containerd[1743]: time="2025-10-30T23:57:36.506756891Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 11.647047428s" Oct 30 23:57:36.506906 containerd[1743]: time="2025-10-30T23:57:36.506790171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 30 23:57:42.529926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:57:42.530431 systemd[1]: kubelet.service: Consumed 133ms CPU time, 107M memory peak. Oct 30 23:57:42.543302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:57:42.579625 systemd[1]: Reload requested from client PID 2945 ('systemctl') (unit session-9.scope)... Oct 30 23:57:42.579782 systemd[1]: Reloading... Oct 30 23:57:42.693928 zram_generator::config[2995]: No configuration found. Oct 30 23:57:42.792686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:57:42.895056 systemd[1]: Reloading finished in 314 ms. Oct 30 23:57:42.948558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:57:42.953215 (kubelet)[3051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 23:57:42.958134 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:57:42.959415 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 23:57:42.959659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:57:42.959719 systemd[1]: kubelet.service: Consumed 87ms CPU time, 96.6M memory peak. Oct 30 23:57:42.965158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:57:50.217478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
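After the daemon-reload ("Reload requested from client PID 2945 … Reloading finished in 314 ms") the kubelet is stopped and started again, and only KUBELET_EXTRA_ARGS is still reported as unset, so the environment file that supplies KUBELET_KUBEADM_ARGS has evidently appeared in the meantime. To watch the unit from outside, querying systemd's own counters is enough; the sketch below shells out to the stock `systemctl show` command and assumes nothing beyond a systemd host:

```python
#!/usr/bin/env python3
# Sketch: read kubelet.service state and restart count straight from systemd.
# ActiveState, SubState, NRestarts and Result are standard unit properties.
import subprocess

out = subprocess.run(
    ["systemctl", "show", "kubelet.service",
     "--property=ActiveState,SubState,NRestarts,Result"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    key, _, value = line.partition("=")
    print(f"{key:12} {value}")
```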
Oct 30 23:57:50.221293 (kubelet)[3068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 23:57:50.255791 kubelet[3068]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:57:50.255791 kubelet[3068]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 23:57:50.255791 kubelet[3068]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:57:50.256171 kubelet[3068]: I1030 23:57:50.255846 3068 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 23:57:51.643899 kubelet[3068]: I1030 23:57:51.643844 3068 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 23:57:51.643899 kubelet[3068]: I1030 23:57:51.643901 3068 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 23:57:51.644291 kubelet[3068]: I1030 23:57:51.644167 3068 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 23:57:51.668096 kubelet[3068]: E1030 23:57:51.668021 3068 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:51.670236 kubelet[3068]: I1030 23:57:51.669998 3068 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 23:57:51.678667 kubelet[3068]: E1030 23:57:51.678620 3068 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 30 23:57:51.678667 kubelet[3068]: I1030 23:57:51.678662 3068 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 30 23:57:51.681526 kubelet[3068]: I1030 23:57:51.681502 3068 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 23:57:51.682368 kubelet[3068]: I1030 23:57:51.682327 3068 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 23:57:51.682548 kubelet[3068]: I1030 23:57:51.682371 3068 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-0164ad71e3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 23:57:51.682676 kubelet[3068]: I1030 23:57:51.682551 3068 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 23:57:51.682676 kubelet[3068]: I1030 23:57:51.682560 3068 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 23:57:51.682725 kubelet[3068]: I1030 23:57:51.682692 3068 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:57:51.685712 kubelet[3068]: I1030 23:57:51.685692 3068 kubelet.go:446] "Attempting to sync node with API server" Oct 30 23:57:51.685764 kubelet[3068]: I1030 23:57:51.685719 3068 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 23:57:51.685764 kubelet[3068]: I1030 23:57:51.685737 3068 kubelet.go:352] "Adding apiserver pod source" Oct 30 23:57:51.685764 kubelet[3068]: I1030 23:57:51.685749 3068 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 23:57:51.688823 kubelet[3068]: W1030 23:57:51.688776 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:51.688904 kubelet[3068]: E1030 23:57:51.688843 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:51.689182 kubelet[3068]: W1030 
23:57:51.689131 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-0164ad71e3&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:51.689239 kubelet[3068]: E1030 23:57:51.689188 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-0164ad71e3&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:51.689459 kubelet[3068]: I1030 23:57:51.689438 3068 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 30 23:57:51.689963 kubelet[3068]: I1030 23:57:51.689942 3068 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 23:57:51.690019 kubelet[3068]: W1030 23:57:51.690003 3068 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 23:57:51.691548 kubelet[3068]: I1030 23:57:51.691507 3068 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 23:57:51.691548 kubelet[3068]: I1030 23:57:51.691550 3068 server.go:1287] "Started kubelet" Oct 30 23:57:51.693975 kubelet[3068]: I1030 23:57:51.693938 3068 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 23:57:51.694420 kubelet[3068]: I1030 23:57:51.694361 3068 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 23:57:51.694723 kubelet[3068]: I1030 23:57:51.694691 3068 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 23:57:51.694999 kubelet[3068]: E1030 23:57:51.694863 3068 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.4-n-0164ad71e3.18736a38dd1023fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.4-n-0164ad71e3,UID:ci-4230.2.4-n-0164ad71e3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.4-n-0164ad71e3,},FirstTimestamp:2025-10-30 23:57:51.691531262 +0000 UTC m=+1.467463702,LastTimestamp:2025-10-30 23:57:51.691531262 +0000 UTC m=+1.467463702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.4-n-0164ad71e3,}" Oct 30 23:57:51.695337 kubelet[3068]: I1030 23:57:51.695321 3068 server.go:479] "Adding debug handlers to kubelet server" Oct 30 23:57:51.697852 kubelet[3068]: E1030 23:57:51.697821 3068 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 23:57:51.700100 kubelet[3068]: I1030 23:57:51.700062 3068 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 23:57:51.700854 kubelet[3068]: I1030 23:57:51.700830 3068 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 23:57:51.704456 kubelet[3068]: E1030 23:57:51.704429 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:51.704604 kubelet[3068]: I1030 23:57:51.704593 3068 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 23:57:51.705028 kubelet[3068]: I1030 23:57:51.704850 3068 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 23:57:51.705028 kubelet[3068]: I1030 23:57:51.704933 3068 reconciler.go:26] "Reconciler: start to sync state" Oct 30 23:57:51.705677 kubelet[3068]: W1030 23:57:51.705640 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:51.705867 kubelet[3068]: E1030 23:57:51.705847 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:51.706409 kubelet[3068]: I1030 23:57:51.706137 3068 factory.go:221] Registration of the systemd container factory successfully Oct 30 23:57:51.706409 kubelet[3068]: I1030 23:57:51.706233 3068 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 23:57:51.708509 kubelet[3068]: E1030 23:57:51.708438 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-0164ad71e3?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms" Oct 30 23:57:51.708662 kubelet[3068]: I1030 23:57:51.708621 3068 factory.go:221] Registration of the containerd container factory successfully Oct 30 23:57:51.724407 kubelet[3068]: I1030 23:57:51.724376 3068 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 23:57:51.724407 kubelet[3068]: I1030 23:57:51.724397 3068 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 23:57:51.724546 kubelet[3068]: I1030 23:57:51.724417 3068 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:57:51.798612 kubelet[3068]: I1030 23:57:51.798577 3068 policy_none.go:49] "None policy: Start" Oct 30 23:57:51.798612 kubelet[3068]: I1030 23:57:51.798612 3068 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 23:57:51.798612 kubelet[3068]: I1030 23:57:51.798626 3068 state_mem.go:35] "Initializing new in-memory state store" Oct 30 23:57:51.805040 kubelet[3068]: E1030 23:57:51.805013 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:51.867669 systemd[1]: Created slice kubepods.slice 
- libcontainer container kubepods.slice. Oct 30 23:57:51.875835 kubelet[3068]: I1030 23:57:51.875786 3068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 23:57:51.878385 kubelet[3068]: I1030 23:57:51.878265 3068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 23:57:51.878487 kubelet[3068]: I1030 23:57:51.878392 3068 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 23:57:51.878487 kubelet[3068]: I1030 23:57:51.878416 3068 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 23:57:51.878487 kubelet[3068]: I1030 23:57:51.878422 3068 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 23:57:51.878487 kubelet[3068]: E1030 23:57:51.878478 3068 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 23:57:51.880456 kubelet[3068]: W1030 23:57:51.880046 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:51.880456 kubelet[3068]: E1030 23:57:51.880088 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:51.882448 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 23:57:51.887080 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 23:57:51.899580 kubelet[3068]: I1030 23:57:51.898692 3068 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 23:57:51.900080 kubelet[3068]: I1030 23:57:51.899860 3068 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 23:57:51.900396 kubelet[3068]: I1030 23:57:51.900286 3068 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 23:57:51.901478 kubelet[3068]: I1030 23:57:51.900719 3068 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 23:57:51.901963 kubelet[3068]: E1030 23:57:51.901942 3068 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 23:57:51.902246 kubelet[3068]: E1030 23:57:51.902233 3068 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:51.909572 kubelet[3068]: E1030 23:57:51.909537 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-0164ad71e3?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms" Oct 30 23:57:51.989082 systemd[1]: Created slice kubepods-burstable-pod9b16a3892325f64f6119aebf5fc65edd.slice - libcontainer container kubepods-burstable-pod9b16a3892325f64f6119aebf5fc65edd.slice. 
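Every reflector, lease request and CSR in this phase fails with "dial tcp 10.200.20.15:6443: connect: connection refused": the kubelet is running before any API server is listening, which is the normal state while the control-plane static pods are still being created from /etc/kubernetes/manifests. A trivial reachability probe against that endpoint (address and port copied from the log, nothing else assumed) shows when that changes:

```python
#!/usr/bin/env python3
# Sketch: probe the API server endpoint the kubelet keeps failing to reach (taken from the log above).
import socket
import time

HOST, PORT = "10.200.20.15", 6443

for attempt in range(10):
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print(f"attempt {attempt}: {HOST}:{PORT} is accepting connections")
            break
    except OSError as exc:
        print(f"attempt {attempt}: {HOST}:{PORT} unreachable ({exc})")
        time.sleep(3)
```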
Oct 30 23:57:52.001763 kubelet[3068]: E1030 23:57:52.001668 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.003277 kubelet[3068]: I1030 23:57:52.002942 3068 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.003532 kubelet[3068]: E1030 23:57:52.003500 3068 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.004020 systemd[1]: Created slice kubepods-burstable-pod37a7d6c02cef60570d9ed1c7fdf99241.slice - libcontainer container kubepods-burstable-pod37a7d6c02cef60570d9ed1c7fdf99241.slice. Oct 30 23:57:52.005373 kubelet[3068]: I1030 23:57:52.005310 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b16a3892325f64f6119aebf5fc65edd-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-0164ad71e3\" (UID: \"9b16a3892325f64f6119aebf5fc65edd\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.006316 kubelet[3068]: I1030 23:57:52.005941 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b16a3892325f64f6119aebf5fc65edd-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-0164ad71e3\" (UID: \"9b16a3892325f64f6119aebf5fc65edd\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.006678 kubelet[3068]: E1030 23:57:52.006636 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.009393 systemd[1]: Created slice kubepods-burstable-pod1ce042869476843fe99a9fb3d82c2350.slice - libcontainer container kubepods-burstable-pod1ce042869476843fe99a9fb3d82c2350.slice. 
Oct 30 23:57:52.010839 kubelet[3068]: E1030 23:57:52.010811 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.106225 kubelet[3068]: I1030 23:57:52.106185 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b16a3892325f64f6119aebf5fc65edd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-0164ad71e3\" (UID: \"9b16a3892325f64f6119aebf5fc65edd\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.106225 kubelet[3068]: I1030 23:57:52.106229 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.106333 kubelet[3068]: I1030 23:57:52.106252 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.106333 kubelet[3068]: I1030 23:57:52.106269 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ce042869476843fe99a9fb3d82c2350-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-0164ad71e3\" (UID: \"1ce042869476843fe99a9fb3d82c2350\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.106333 kubelet[3068]: I1030 23:57:52.106284 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.106333 kubelet[3068]: I1030 23:57:52.106300 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.106333 kubelet[3068]: I1030 23:57:52.106315 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.206113 kubelet[3068]: I1030 23:57:52.206012 3068 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.207244 kubelet[3068]: E1030 23:57:52.207202 3068 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.253476 containerd[1743]: time="2025-10-30T23:57:52.253422292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-0164ad71e3,Uid:1ce042869476843fe99a9fb3d82c2350,Namespace:kube-system,Attempt:0,}" Oct 30 23:57:52.254387 containerd[1743]: time="2025-10-30T23:57:52.254235254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-0164ad71e3,Uid:37a7d6c02cef60570d9ed1c7fdf99241,Namespace:kube-system,Attempt:0,}" Oct 30 23:57:52.254387 containerd[1743]: time="2025-10-30T23:57:52.254269534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-0164ad71e3,Uid:9b16a3892325f64f6119aebf5fc65edd,Namespace:kube-system,Attempt:0,}" Oct 30 23:57:52.310332 kubelet[3068]: E1030 23:57:52.310286 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-0164ad71e3?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms" Oct 30 23:57:52.609829 kubelet[3068]: I1030 23:57:52.609720 3068 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.610139 kubelet[3068]: E1030 23:57:52.610076 3068 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:52.720867 kubelet[3068]: W1030 23:57:52.720780 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:52.720867 kubelet[3068]: E1030 23:57:52.720845 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:52.722093 kubelet[3068]: W1030 23:57:52.722052 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:52.722139 kubelet[3068]: E1030 23:57:52.722107 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:52.930792 kubelet[3068]: W1030 23:57:52.930704 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-0164ad71e3&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:52.930792 kubelet[3068]: E1030 23:57:52.930769 3068 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-0164ad71e3&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:52.955578 kubelet[3068]: W1030 23:57:52.955510 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:52.955578 kubelet[3068]: E1030 23:57:52.955549 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:53.111003 kubelet[3068]: E1030 23:57:53.110959 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-0164ad71e3?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="1.6s" Oct 30 23:57:53.412817 kubelet[3068]: I1030 23:57:53.412493 3068 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:53.412817 kubelet[3068]: E1030 23:57:53.412806 3068 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:53.721157 kubelet[3068]: E1030 23:57:53.721041 3068 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:54.196132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850996946.mount: Deactivated successfully. 
Oct 30 23:57:54.222839 containerd[1743]: time="2025-10-30T23:57:54.222021673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:57:54.231166 containerd[1743]: time="2025-10-30T23:57:54.231110333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Oct 30 23:57:54.244295 containerd[1743]: time="2025-10-30T23:57:54.243239761Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:57:54.247702 containerd[1743]: time="2025-10-30T23:57:54.247046649Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:57:54.252140 containerd[1743]: time="2025-10-30T23:57:54.252093861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 30 23:57:54.267945 containerd[1743]: time="2025-10-30T23:57:54.267417775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:57:54.268557 containerd[1743]: time="2025-10-30T23:57:54.268121097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.013825523s" Oct 30 23:57:54.272917 containerd[1743]: time="2025-10-30T23:57:54.272868307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 23:57:54.286870 containerd[1743]: time="2025-10-30T23:57:54.286813419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 30 23:57:54.287557 containerd[1743]: time="2025-10-30T23:57:54.287522540Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.034010448s" Oct 30 23:57:54.318673 containerd[1743]: time="2025-10-30T23:57:54.318628130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.064314476s" Oct 30 23:57:54.710596 kubelet[3068]: W1030 23:57:54.710552 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:54.710736 
kubelet[3068]: E1030 23:57:54.710612 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:54.711728 kubelet[3068]: E1030 23:57:54.711701 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.4-n-0164ad71e3?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="3.2s" Oct 30 23:57:54.966846 containerd[1743]: time="2025-10-30T23:57:54.966445708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:57:54.966846 containerd[1743]: time="2025-10-30T23:57:54.966527788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:57:54.966846 containerd[1743]: time="2025-10-30T23:57:54.966572508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:54.966846 containerd[1743]: time="2025-10-30T23:57:54.966669028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:54.967581 containerd[1743]: time="2025-10-30T23:57:54.967398430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:57:54.967581 containerd[1743]: time="2025-10-30T23:57:54.967449470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:57:54.967760 containerd[1743]: time="2025-10-30T23:57:54.967648470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:54.968158 containerd[1743]: time="2025-10-30T23:57:54.968112751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:54.972343 containerd[1743]: time="2025-10-30T23:57:54.972050080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:57:54.972598 containerd[1743]: time="2025-10-30T23:57:54.972475801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:57:54.972680 containerd[1743]: time="2025-10-30T23:57:54.972584162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:54.977940 containerd[1743]: time="2025-10-30T23:57:54.976909211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:57:55.014851 kubelet[3068]: I1030 23:57:55.014815 3068 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:55.015203 kubelet[3068]: E1030 23:57:55.015159 3068 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:55.041129 systemd[1]: Started cri-containerd-2e36ecbe7ee7ec0590baff44ff9c17f5708ccf918ee4e7a59aff943a2a2158f0.scope - libcontainer container 2e36ecbe7ee7ec0590baff44ff9c17f5708ccf918ee4e7a59aff943a2a2158f0. Oct 30 23:57:55.043418 systemd[1]: Started cri-containerd-915a327b817d856da4ce55476d0fae69c572336db6a33fe265a91a085d21660e.scope - libcontainer container 915a327b817d856da4ce55476d0fae69c572336db6a33fe265a91a085d21660e. Oct 30 23:57:55.044469 systemd[1]: Started cri-containerd-9db0e0f2d9479291713f7078ad09953729777f201374a2d2fe78f58ce26c6bd6.scope - libcontainer container 9db0e0f2d9479291713f7078ad09953729777f201374a2d2fe78f58ce26c6bd6. Oct 30 23:57:55.082993 kubelet[3068]: W1030 23:57:55.082873 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-0164ad71e3&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:55.082993 kubelet[3068]: E1030 23:57:55.082943 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.4-n-0164ad71e3&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:55.097714 containerd[1743]: time="2025-10-30T23:57:55.097612523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.4-n-0164ad71e3,Uid:1ce042869476843fe99a9fb3d82c2350,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e36ecbe7ee7ec0590baff44ff9c17f5708ccf918ee4e7a59aff943a2a2158f0\"" Oct 30 23:57:55.103945 containerd[1743]: time="2025-10-30T23:57:55.103793337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.4-n-0164ad71e3,Uid:37a7d6c02cef60570d9ed1c7fdf99241,Namespace:kube-system,Attempt:0,} returns sandbox id \"915a327b817d856da4ce55476d0fae69c572336db6a33fe265a91a085d21660e\"" Oct 30 23:57:55.105419 containerd[1743]: time="2025-10-30T23:57:55.105297340Z" level=info msg="CreateContainer within sandbox \"2e36ecbe7ee7ec0590baff44ff9c17f5708ccf918ee4e7a59aff943a2a2158f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 23:57:55.107447 containerd[1743]: time="2025-10-30T23:57:55.107349585Z" level=info msg="CreateContainer within sandbox \"915a327b817d856da4ce55476d0fae69c572336db6a33fe265a91a085d21660e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 23:57:55.110689 containerd[1743]: time="2025-10-30T23:57:55.110618352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.4-n-0164ad71e3,Uid:9b16a3892325f64f6119aebf5fc65edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9db0e0f2d9479291713f7078ad09953729777f201374a2d2fe78f58ce26c6bd6\"" Oct 30 23:57:55.113197 containerd[1743]: time="2025-10-30T23:57:55.113159758Z" level=info 
msg="CreateContainer within sandbox \"9db0e0f2d9479291713f7078ad09953729777f201374a2d2fe78f58ce26c6bd6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 23:57:55.212560 containerd[1743]: time="2025-10-30T23:57:55.212514861Z" level=info msg="CreateContainer within sandbox \"2e36ecbe7ee7ec0590baff44ff9c17f5708ccf918ee4e7a59aff943a2a2158f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c12feab4f21210a1e51fd756929025f3847f05be17ca3898ce23f5f8a9c7030\"" Oct 30 23:57:55.214509 containerd[1743]: time="2025-10-30T23:57:55.213363063Z" level=info msg="StartContainer for \"7c12feab4f21210a1e51fd756929025f3847f05be17ca3898ce23f5f8a9c7030\"" Oct 30 23:57:55.244069 systemd[1]: Started cri-containerd-7c12feab4f21210a1e51fd756929025f3847f05be17ca3898ce23f5f8a9c7030.scope - libcontainer container 7c12feab4f21210a1e51fd756929025f3847f05be17ca3898ce23f5f8a9c7030. Oct 30 23:57:55.245318 containerd[1743]: time="2025-10-30T23:57:55.245141775Z" level=info msg="CreateContainer within sandbox \"9db0e0f2d9479291713f7078ad09953729777f201374a2d2fe78f58ce26c6bd6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa6e3d4a54b837d408aeac9e4cef6bd877ed1c30e360fa67e8979dd8f81462d6\"" Oct 30 23:57:55.246310 containerd[1743]: time="2025-10-30T23:57:55.246152417Z" level=info msg="StartContainer for \"fa6e3d4a54b837d408aeac9e4cef6bd877ed1c30e360fa67e8979dd8f81462d6\"" Oct 30 23:57:55.252091 containerd[1743]: time="2025-10-30T23:57:55.251671629Z" level=info msg="CreateContainer within sandbox \"915a327b817d856da4ce55476d0fae69c572336db6a33fe265a91a085d21660e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ae6bea92c87fec8a7b4e6082776b996e0cb1a709650a5ad2ff5205a71c718e43\"" Oct 30 23:57:55.253204 containerd[1743]: time="2025-10-30T23:57:55.253081513Z" level=info msg="StartContainer for \"ae6bea92c87fec8a7b4e6082776b996e0cb1a709650a5ad2ff5205a71c718e43\"" Oct 30 23:57:55.285059 systemd[1]: Started cri-containerd-fa6e3d4a54b837d408aeac9e4cef6bd877ed1c30e360fa67e8979dd8f81462d6.scope - libcontainer container fa6e3d4a54b837d408aeac9e4cef6bd877ed1c30e360fa67e8979dd8f81462d6. Oct 30 23:57:55.294273 systemd[1]: Started cri-containerd-ae6bea92c87fec8a7b4e6082776b996e0cb1a709650a5ad2ff5205a71c718e43.scope - libcontainer container ae6bea92c87fec8a7b4e6082776b996e0cb1a709650a5ad2ff5205a71c718e43. 
Oct 30 23:57:55.301934 containerd[1743]: time="2025-10-30T23:57:55.301866582Z" level=info msg="StartContainer for \"7c12feab4f21210a1e51fd756929025f3847f05be17ca3898ce23f5f8a9c7030\" returns successfully" Oct 30 23:57:55.302678 kubelet[3068]: W1030 23:57:55.302463 3068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Oct 30 23:57:55.302678 kubelet[3068]: E1030 23:57:55.302499 3068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:57:55.356864 containerd[1743]: time="2025-10-30T23:57:55.356696986Z" level=info msg="StartContainer for \"fa6e3d4a54b837d408aeac9e4cef6bd877ed1c30e360fa67e8979dd8f81462d6\" returns successfully" Oct 30 23:57:55.356864 containerd[1743]: time="2025-10-30T23:57:55.356696946Z" level=info msg="StartContainer for \"ae6bea92c87fec8a7b4e6082776b996e0cb1a709650a5ad2ff5205a71c718e43\" returns successfully" Oct 30 23:57:55.891658 kubelet[3068]: E1030 23:57:55.891615 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:55.895609 kubelet[3068]: E1030 23:57:55.895461 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:55.898607 kubelet[3068]: E1030 23:57:55.898583 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:56.902900 kubelet[3068]: E1030 23:57:56.901093 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:56.902900 kubelet[3068]: E1030 23:57:56.901246 3068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:57.641567 kubelet[3068]: E1030 23:57:57.641533 3068 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.4-n-0164ad71e3" not found Oct 30 23:57:57.917718 kubelet[3068]: E1030 23:57:57.917606 3068 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.4-n-0164ad71e3\" not found" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:58.015174 kubelet[3068]: E1030 23:57:58.015139 3068 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.4-n-0164ad71e3" not found Oct 30 23:57:58.217998 kubelet[3068]: I1030 23:57:58.217690 3068 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:58.228219 kubelet[3068]: I1030 23:57:58.228170 3068 kubelet_node_status.go:78] "Successfully registered 
node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:57:58.228219 kubelet[3068]: E1030 23:57:58.228210 3068 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.4-n-0164ad71e3\": node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.239982 kubelet[3068]: E1030 23:57:58.239936 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.340591 kubelet[3068]: E1030 23:57:58.340537 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.441245 kubelet[3068]: E1030 23:57:58.441201 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.542006 kubelet[3068]: E1030 23:57:58.541327 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.641923 kubelet[3068]: E1030 23:57:58.641859 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.743241 kubelet[3068]: E1030 23:57:58.742935 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.844147 kubelet[3068]: E1030 23:57:58.844015 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:58.945073 kubelet[3068]: E1030 23:57:58.945019 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.045742 kubelet[3068]: E1030 23:57:59.045703 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.146387 kubelet[3068]: E1030 23:57:59.146349 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.247358 kubelet[3068]: E1030 23:57:59.247313 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.348402 kubelet[3068]: E1030 23:57:59.348358 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.449376 kubelet[3068]: E1030 23:57:59.449250 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.549953 kubelet[3068]: E1030 23:57:59.549914 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.650996 kubelet[3068]: E1030 23:57:59.650943 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.666260 systemd[1]: Reload requested from client PID 3346 ('systemctl') (unit session-9.scope)... Oct 30 23:57:59.666547 systemd[1]: Reloading... Oct 30 23:57:59.751385 kubelet[3068]: E1030 23:57:59.751258 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.761927 zram_generator::config[3393]: No configuration found. 
Oct 30 23:57:59.851825 kubelet[3068]: E1030 23:57:59.851782 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.867291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 30 23:57:59.952749 kubelet[3068]: E1030 23:57:59.952708 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:57:59.984059 systemd[1]: Reloading finished in 317 ms. Oct 30 23:58:00.009016 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:58:00.022301 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 23:58:00.022510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:58:00.022556 systemd[1]: kubelet.service: Consumed 1.775s CPU time, 126.4M memory peak. Oct 30 23:58:00.027292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:58:00.169267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:58:00.179234 (kubelet)[3457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 23:58:00.275289 kubelet[3457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:58:00.275289 kubelet[3457]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 23:58:00.275289 kubelet[3457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:58:00.275289 kubelet[3457]: I1030 23:58:00.273357 3457 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 23:58:00.281953 kubelet[3457]: I1030 23:58:00.281430 3457 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 23:58:00.281953 kubelet[3457]: I1030 23:58:00.281459 3457 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 23:58:00.281953 kubelet[3457]: I1030 23:58:00.281712 3457 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 23:58:00.283573 kubelet[3457]: I1030 23:58:00.283390 3457 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 30 23:58:00.286919 kubelet[3457]: I1030 23:58:00.285549 3457 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 23:58:00.293070 kubelet[3457]: E1030 23:58:00.292931 3457 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 30 23:58:00.293070 kubelet[3457]: I1030 23:58:00.292969 3457 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Oct 30 23:58:00.295996 kubelet[3457]: I1030 23:58:00.295968 3457 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 30 23:58:00.296224 kubelet[3457]: I1030 23:58:00.296194 3457 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 23:58:00.296395 kubelet[3457]: I1030 23:58:00.296224 3457 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.4-n-0164ad71e3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 23:58:00.296481 kubelet[3457]: I1030 23:58:00.296404 3457 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 23:58:00.296481 kubelet[3457]: I1030 23:58:00.296412 3457 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 23:58:00.296481 kubelet[3457]: I1030 23:58:00.296451 3457 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:58:00.296575 kubelet[3457]: I1030 23:58:00.296563 3457 kubelet.go:446] "Attempting to sync node with API server" Oct 30 23:58:00.296604 kubelet[3457]: I1030 23:58:00.296578 3457 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 23:58:00.296604 kubelet[3457]: I1030 23:58:00.296596 3457 kubelet.go:352] "Adding apiserver pod source" Oct 30 23:58:00.296953 kubelet[3457]: I1030 23:58:00.296605 3457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 23:58:00.302742 kubelet[3457]: I1030 23:58:00.301690 3457 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 30 23:58:00.302742 kubelet[3457]: I1030 23:58:00.302414 3457 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 23:58:00.305843 kubelet[3457]: I1030 23:58:00.305325 3457 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 23:58:00.305843 kubelet[3457]: I1030 23:58:00.305374 3457 server.go:1287] "Started kubelet" Oct 
30 23:58:00.312203 kubelet[3457]: I1030 23:58:00.312065 3457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 23:58:00.314109 kubelet[3457]: I1030 23:58:00.314028 3457 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 23:58:00.314109 kubelet[3457]: I1030 23:58:00.311948 3457 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 23:58:00.315175 kubelet[3457]: I1030 23:58:00.315092 3457 server.go:479] "Adding debug handlers to kubelet server" Oct 30 23:58:00.316646 kubelet[3457]: I1030 23:58:00.316519 3457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 23:58:00.317216 kubelet[3457]: I1030 23:58:00.317189 3457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 23:58:00.317960 kubelet[3457]: I1030 23:58:00.317947 3457 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 23:58:00.318353 kubelet[3457]: E1030 23:58:00.318241 3457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.4-n-0164ad71e3\" not found" Oct 30 23:58:00.319594 kubelet[3457]: I1030 23:58:00.319579 3457 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 23:58:00.319833 kubelet[3457]: I1030 23:58:00.319816 3457 reconciler.go:26] "Reconciler: start to sync state" Oct 30 23:58:00.324022 kubelet[3457]: I1030 23:58:00.324005 3457 factory.go:221] Registration of the systemd container factory successfully Oct 30 23:58:00.324205 kubelet[3457]: I1030 23:58:00.324137 3457 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 23:58:00.335911 kubelet[3457]: E1030 23:58:00.335273 3457 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 23:58:00.335911 kubelet[3457]: I1030 23:58:00.335601 3457 factory.go:221] Registration of the containerd container factory successfully Oct 30 23:58:00.371655 kubelet[3457]: I1030 23:58:00.371584 3457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 23:58:00.372830 kubelet[3457]: I1030 23:58:00.372809 3457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 23:58:03.749707 kubelet[3457]: I1030 23:58:00.372905 3457 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 23:58:03.749707 kubelet[3457]: I1030 23:58:00.372928 3457 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
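The "Creating Container Manager object based on Node Config" entry above embeds the full nodeConfig as JSON, including the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A sketch that extracts that JSON from a saved copy of the journal and prints the thresholds; journal.txt is an assumed filename, and the long line is assumed to be unwrapped in the copy:

#!/usr/bin/env python3
# Sketch: pull the nodeConfig JSON out of the "Creating Container Manager object
# based on Node Config" entry above and list the hard eviction thresholds.
# journal.txt is an assumed filename; the long line is assumed to be unwrapped.
import json

def extract_node_config(line: str) -> dict:
    start = line.index("nodeConfig=") + len("nodeConfig=")
    depth = 0
    for i, ch in enumerate(line[start:], start):   # match the outermost braces
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(line[start:i + 1])
    raise ValueError("unterminated nodeConfig JSON")

with open("journal.txt", encoding="utf-8", errors="replace") as fh:
    line = next(l for l in fh if "nodeConfig=" in l)

for t in extract_node_config(line)["HardEvictionThresholds"]:
    quantity = t["Value"]["Quantity"] or f"{t['Value']['Percentage']:.0%}"
    print(f'{t["Signal"]} {t["Operator"]} {quantity}')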
Oct 30 23:58:03.749707 kubelet[3457]: I1030 23:58:00.372935 3457 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 23:58:03.749707 kubelet[3457]: E1030 23:58:00.372971 3457 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 23:58:03.749707 kubelet[3457]: I1030 23:58:00.403049 3457 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 23:58:03.749707 kubelet[3457]: I1030 23:58:00.403063 3457 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 23:58:03.749707 kubelet[3457]: I1030 23:58:00.403083 3457 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:58:03.749707 kubelet[3457]: E1030 23:58:00.473697 3457 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 23:58:03.749707 kubelet[3457]: E1030 23:58:00.674087 3457 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 23:58:03.749707 kubelet[3457]: E1030 23:58:01.075011 3457 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 23:58:03.749707 kubelet[3457]: I1030 23:58:01.297940 3457 apiserver.go:52] "Watching apiserver" Oct 30 23:58:03.749707 kubelet[3457]: E1030 23:58:01.875964 3457 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 23:58:03.749707 kubelet[3457]: E1030 23:58:03.476834 3457 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 23:58:03.752271 kubelet[3457]: I1030 23:58:03.750823 3457 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 23:58:03.752271 kubelet[3457]: I1030 23:58:03.750842 3457 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 23:58:03.752271 kubelet[3457]: I1030 23:58:03.750862 3457 policy_none.go:49] "None policy: Start" Oct 30 23:58:03.752271 kubelet[3457]: I1030 23:58:03.750873 3457 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 23:58:03.752271 kubelet[3457]: I1030 23:58:03.750965 3457 state_mem.go:35] "Initializing new in-memory state store" Oct 30 23:58:03.752271 kubelet[3457]: I1030 23:58:03.751090 3457 state_mem.go:75] "Updated machine memory state" Oct 30 23:58:03.757891 kubelet[3457]: I1030 23:58:03.757862 3457 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 23:58:03.759204 kubelet[3457]: I1030 23:58:03.758337 3457 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 23:58:03.759204 kubelet[3457]: I1030 23:58:03.758354 3457 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 23:58:03.759204 kubelet[3457]: I1030 23:58:03.758727 3457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 23:58:03.761858 kubelet[3457]: E1030 23:58:03.761521 3457 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 23:58:03.806354 sudo[3492]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 30 23:58:03.806636 sudo[3492]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 30 23:58:03.870544 kubelet[3457]: I1030 23:58:03.870508 3457 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:03.886961 kubelet[3457]: I1030 23:58:03.886499 3457 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:03.886961 kubelet[3457]: I1030 23:58:03.886576 3457 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:04.256230 sudo[3492]: pam_unix(sudo:session): session closed for user root Oct 30 23:58:05.170466 kubelet[3457]: I1030 23:58:05.170426 3457 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 23:58:05.171154 kubelet[3457]: I1030 23:58:05.170991 3457 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 23:58:05.171188 containerd[1743]: time="2025-10-30T23:58:05.170721261Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 23:58:06.677547 kubelet[3457]: I1030 23:58:06.677358 3457 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:06.691548 systemd[1]: Created slice kubepods-besteffort-pod2e3fb740_ad98_4afc_ad6d_16af7856efc2.slice - libcontainer container kubepods-besteffort-pod2e3fb740_ad98_4afc_ad6d_16af7856efc2.slice. Oct 30 23:58:07.654727 sshd[2443]: Connection closed by 10.200.16.10 port 53320 Oct 30 23:58:07.654833 kubelet[3457]: I1030 23:58:06.677358 3457 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.654833 kubelet[3457]: I1030 23:58:06.678982 3457 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.654833 kubelet[3457]: W1030 23:58:06.695081 3457 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 23:58:07.654833 kubelet[3457]: W1030 23:58:06.695617 3457 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 23:58:07.654833 kubelet[3457]: W1030 23:58:06.696786 3457 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 30 23:58:07.654833 kubelet[3457]: I1030 23:58:06.720834 3457 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 23:58:07.654833 kubelet[3457]: I1030 23:58:06.751462 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-etc-cni-netd\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.654833 kubelet[3457]: I1030 23:58:06.751500 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-252cw\" (UniqueName: \"kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-kube-api-access-252cw\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.529918 sudo[2444]: pam_unix(sudo:session): session closed for user root Oct 30 23:58:06.706982 systemd[1]: Created slice kubepods-burstable-podcf34bf6e_2f6c_4fcf_863d_7d5010123939.slice - libcontainer container kubepods-burstable-podcf34bf6e_2f6c_4fcf_863d_7d5010123939.slice. Oct 30 23:58:07.655180 kubelet[3457]: I1030 23:58:06.751521 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6275dbb-2c27-4e0f-baea-89ef852cccb7-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v82mc\" (UID: \"c6275dbb-2c27-4e0f-baea-89ef852cccb7\") " pod="kube-system/cilium-operator-6c4d7847fc-v82mc" Oct 30 23:58:07.655180 kubelet[3457]: I1030 23:58:06.751537 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-ca-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.655180 kubelet[3457]: I1030 23:58:06.751555 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e3fb740-ad98-4afc-ad6d-16af7856efc2-xtables-lock\") pod \"kube-proxy-zbs5g\" (UID: \"2e3fb740-ad98-4afc-ad6d-16af7856efc2\") " pod="kube-system/kube-proxy-zbs5g" Oct 30 23:58:07.655180 kubelet[3457]: I1030 23:58:06.751571 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cni-path\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655180 kubelet[3457]: I1030 23:58:06.751586 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-kernel\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.602142 sshd-session[2441]: pam_unix(sshd:session): session closed for user core Oct 30 23:58:06.712944 systemd[1]: Created slice kubepods-besteffort-podc6275dbb_2c27_4e0f_baea_89ef852cccb7.slice - libcontainer container kubepods-besteffort-podc6275dbb_2c27_4e0f_baea_89ef852cccb7.slice. 
Oct 30 23:58:07.655412 kubelet[3457]: I1030 23:58:06.751604 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.655412 kubelet[3457]: I1030 23:58:06.751624 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-cgroup\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655412 kubelet[3457]: I1030 23:58:06.751639 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-run\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655412 kubelet[3457]: I1030 23:58:06.751653 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-bpf-maps\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655412 kubelet[3457]: I1030 23:58:06.751668 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf34bf6e-2f6c-4fcf-863d-7d5010123939-clustermesh-secrets\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.604960 systemd-logind[1718]: Session 9 logged out. Waiting for processes to exit. 
Oct 30 23:58:07.655667 kubelet[3457]: I1030 23:58:06.751684 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hubble-tls\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655667 kubelet[3457]: I1030 23:58:06.751699 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b16a3892325f64f6119aebf5fc65edd-k8s-certs\") pod \"kube-apiserver-ci-4230.2.4-n-0164ad71e3\" (UID: \"9b16a3892325f64f6119aebf5fc65edd\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.655667 kubelet[3457]: I1030 23:58:06.751714 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ce042869476843fe99a9fb3d82c2350-kubeconfig\") pod \"kube-scheduler-ci-4230.2.4-n-0164ad71e3\" (UID: \"1ce042869476843fe99a9fb3d82c2350\") " pod="kube-system/kube-scheduler-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.655667 kubelet[3457]: I1030 23:58:06.751728 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e3fb740-ad98-4afc-ad6d-16af7856efc2-lib-modules\") pod \"kube-proxy-zbs5g\" (UID: \"2e3fb740-ad98-4afc-ad6d-16af7856efc2\") " pod="kube-system/kube-proxy-zbs5g" Oct 30 23:58:07.655667 kubelet[3457]: I1030 23:58:06.751743 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j524\" (UniqueName: \"kubernetes.io/projected/2e3fb740-ad98-4afc-ad6d-16af7856efc2-kube-api-access-2j524\") pod \"kube-proxy-zbs5g\" (UID: \"2e3fb740-ad98-4afc-ad6d-16af7856efc2\") " pod="kube-system/kube-proxy-zbs5g" Oct 30 23:58:07.605239 systemd[1]: sshd@6-10.200.20.15:22-10.200.16.10:53320.service: Deactivated successfully. 
Oct 30 23:58:07.655808 kubelet[3457]: I1030 23:58:06.751758 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-config-path\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655808 kubelet[3457]: I1030 23:58:06.751782 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgn44\" (UniqueName: \"kubernetes.io/projected/c6275dbb-2c27-4e0f-baea-89ef852cccb7-kube-api-access-cgn44\") pod \"cilium-operator-6c4d7847fc-v82mc\" (UID: \"c6275dbb-2c27-4e0f-baea-89ef852cccb7\") " pod="kube-system/cilium-operator-6c4d7847fc-v82mc" Oct 30 23:58:07.655808 kubelet[3457]: I1030 23:58:06.751797 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.655808 kubelet[3457]: I1030 23:58:06.751811 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hostproc\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655808 kubelet[3457]: I1030 23:58:06.751827 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-net\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.607533 systemd[1]: session-9.scope: Deactivated successfully. 
Oct 30 23:58:07.655978 kubelet[3457]: I1030 23:58:06.751842 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b16a3892325f64f6119aebf5fc65edd-ca-certs\") pod \"kube-apiserver-ci-4230.2.4-n-0164ad71e3\" (UID: \"9b16a3892325f64f6119aebf5fc65edd\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.655978 kubelet[3457]: I1030 23:58:06.751857 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b16a3892325f64f6119aebf5fc65edd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.4-n-0164ad71e3\" (UID: \"9b16a3892325f64f6119aebf5fc65edd\") " pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.655978 kubelet[3457]: I1030 23:58:06.751872 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-lib-modules\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655978 kubelet[3457]: I1030 23:58:06.751911 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-xtables-lock\") pod \"cilium-t76xf\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " pod="kube-system/cilium-t76xf" Oct 30 23:58:07.655978 kubelet[3457]: I1030 23:58:06.751927 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.607780 systemd[1]: session-9.scope: Consumed 7.272s CPU time, 261.1M memory peak. 
Oct 30 23:58:07.656649 kubelet[3457]: I1030 23:58:06.751944 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37a7d6c02cef60570d9ed1c7fdf99241-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.4-n-0164ad71e3\" (UID: \"37a7d6c02cef60570d9ed1c7fdf99241\") " pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" Oct 30 23:58:07.656649 kubelet[3457]: I1030 23:58:06.751960 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e3fb740-ad98-4afc-ad6d-16af7856efc2-kube-proxy\") pod \"kube-proxy-zbs5g\" (UID: \"2e3fb740-ad98-4afc-ad6d-16af7856efc2\") " pod="kube-system/kube-proxy-zbs5g" Oct 30 23:58:07.656649 kubelet[3457]: I1030 23:58:06.886218 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.4-n-0164ad71e3" podStartSLOduration=0.886198241 podStartE2EDuration="886.198241ms" podCreationTimestamp="2025-10-30 23:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:06.84792572 +0000 UTC m=+6.664822478" watchObservedRunningTime="2025-10-30 23:58:06.886198241 +0000 UTC m=+6.703094959" Oct 30 23:58:07.656649 kubelet[3457]: I1030 23:58:06.938339 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.4-n-0164ad71e3" podStartSLOduration=0.938320672 podStartE2EDuration="938.320672ms" podCreationTimestamp="2025-10-30 23:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:06.886691722 +0000 UTC m=+6.703588440" watchObservedRunningTime="2025-10-30 23:58:06.938320672 +0000 UTC m=+6.755217390" Oct 30 23:58:07.610395 systemd-logind[1718]: Removed session 9. Oct 30 23:58:07.656821 kubelet[3457]: I1030 23:58:07.019104 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.4-n-0164ad71e3" podStartSLOduration=1.019085565 podStartE2EDuration="1.019085565s" podCreationTimestamp="2025-10-30 23:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:06.938480153 +0000 UTC m=+6.755376871" watchObservedRunningTime="2025-10-30 23:58:07.019085565 +0000 UTC m=+6.835982243" Oct 30 23:58:07.751484 containerd[1743]: time="2025-10-30T23:58:07.751202007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbs5g,Uid:2e3fb740-ad98-4afc-ad6d-16af7856efc2,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:07.757371 containerd[1743]: time="2025-10-30T23:58:07.757320260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t76xf,Uid:cf34bf6e-2f6c-4fcf-863d-7d5010123939,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:07.765295 containerd[1743]: time="2025-10-30T23:58:07.765064396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v82mc,Uid:c6275dbb-2c27-4e0f-baea-89ef852cccb7,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:09.317771 containerd[1743]: time="2025-10-30T23:58:09.317465802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:58:09.317771 containerd[1743]: time="2025-10-30T23:58:09.317584722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:58:09.317771 containerd[1743]: time="2025-10-30T23:58:09.317603402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:09.317771 containerd[1743]: time="2025-10-30T23:58:09.317706482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:09.344076 systemd[1]: Started cri-containerd-fd3768104b3842f3626ae22501e29c8089028b04d4d687fc675d1b354c041212.scope - libcontainer container fd3768104b3842f3626ae22501e29c8089028b04d4d687fc675d1b354c041212. Oct 30 23:58:09.365041 containerd[1743]: time="2025-10-30T23:58:09.364959121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbs5g,Uid:2e3fb740-ad98-4afc-ad6d-16af7856efc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd3768104b3842f3626ae22501e29c8089028b04d4d687fc675d1b354c041212\"" Oct 30 23:58:09.370512 containerd[1743]: time="2025-10-30T23:58:09.370470650Z" level=info msg="CreateContainer within sandbox \"fd3768104b3842f3626ae22501e29c8089028b04d4d687fc675d1b354c041212\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 23:58:09.467948 containerd[1743]: time="2025-10-30T23:58:09.467815451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:58:09.468178 containerd[1743]: time="2025-10-30T23:58:09.467875171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:58:09.468178 containerd[1743]: time="2025-10-30T23:58:09.468121611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:09.468460 containerd[1743]: time="2025-10-30T23:58:09.468372932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:09.485058 systemd[1]: Started cri-containerd-97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578.scope - libcontainer container 97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578. Oct 30 23:58:09.507001 containerd[1743]: time="2025-10-30T23:58:09.506956556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t76xf,Uid:cf34bf6e-2f6c-4fcf-863d-7d5010123939,Namespace:kube-system,Attempt:0,} returns sandbox id \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\"" Oct 30 23:58:09.510325 containerd[1743]: time="2025-10-30T23:58:09.510282641Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 30 23:58:09.618763 containerd[1743]: time="2025-10-30T23:58:09.618341140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:58:09.618763 containerd[1743]: time="2025-10-30T23:58:09.618433580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:58:09.618763 containerd[1743]: time="2025-10-30T23:58:09.618451500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:09.619519 containerd[1743]: time="2025-10-30T23:58:09.619325902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:09.639135 systemd[1]: Started cri-containerd-a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32.scope - libcontainer container a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32. Oct 30 23:58:09.671554 containerd[1743]: time="2025-10-30T23:58:09.671479148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v82mc,Uid:c6275dbb-2c27-4e0f-baea-89ef852cccb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32\"" Oct 30 23:58:09.903174 containerd[1743]: time="2025-10-30T23:58:09.903125851Z" level=info msg="CreateContainer within sandbox \"fd3768104b3842f3626ae22501e29c8089028b04d4d687fc675d1b354c041212\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb57c0ba3d5a7e0e4fe4eca4e479cf87a0e0e9d8dd70daa9aa8c014bc85dfb14\"" Oct 30 23:58:09.904548 containerd[1743]: time="2025-10-30T23:58:09.904377893Z" level=info msg="StartContainer for \"fb57c0ba3d5a7e0e4fe4eca4e479cf87a0e0e9d8dd70daa9aa8c014bc85dfb14\"" Oct 30 23:58:09.929358 systemd[1]: Started cri-containerd-fb57c0ba3d5a7e0e4fe4eca4e479cf87a0e0e9d8dd70daa9aa8c014bc85dfb14.scope - libcontainer container fb57c0ba3d5a7e0e4fe4eca4e479cf87a0e0e9d8dd70daa9aa8c014bc85dfb14. Oct 30 23:58:09.964414 containerd[1743]: time="2025-10-30T23:58:09.964253712Z" level=info msg="StartContainer for \"fb57c0ba3d5a7e0e4fe4eca4e479cf87a0e0e9d8dd70daa9aa8c014bc85dfb14\" returns successfully" Oct 30 23:58:10.689516 kubelet[3457]: I1030 23:58:10.689251 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zbs5g" podStartSLOduration=5.689232352 podStartE2EDuration="5.689232352s" podCreationTimestamp="2025-10-30 23:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:10.419735906 +0000 UTC m=+10.236632624" watchObservedRunningTime="2025-10-30 23:58:10.689232352 +0000 UTC m=+10.506129070" Oct 30 23:58:15.765670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366550701.mount: Deactivated successfully. 
Oct 30 23:58:21.309023 containerd[1743]: time="2025-10-30T23:58:21.308965732Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:23.358221 containerd[1743]: time="2025-10-30T23:58:23.358151476Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Oct 30 23:58:23.406545 containerd[1743]: time="2025-10-30T23:58:23.406050861Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:23.408518 containerd[1743]: time="2025-10-30T23:58:23.407827265Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.897502384s" Oct 30 23:58:23.408518 containerd[1743]: time="2025-10-30T23:58:23.407867585Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 30 23:58:23.410402 containerd[1743]: time="2025-10-30T23:58:23.410338631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 30 23:58:23.411746 containerd[1743]: time="2025-10-30T23:58:23.411699914Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 30 23:58:23.750377 containerd[1743]: time="2025-10-30T23:58:23.750324738Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\"" Oct 30 23:58:23.751299 containerd[1743]: time="2025-10-30T23:58:23.751161900Z" level=info msg="StartContainer for \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\"" Oct 30 23:58:23.783113 systemd[1]: Started cri-containerd-dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49.scope - libcontainer container dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49. Oct 30 23:58:23.836284 systemd[1]: cri-containerd-dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49.scope: Deactivated successfully. Oct 30 23:58:23.849654 containerd[1743]: time="2025-10-30T23:58:23.849607076Z" level=info msg="StartContainer for \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" returns successfully" Oct 30 23:58:24.612227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49-rootfs.mount: Deactivated successfully. 
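The pull above reports 157646710 bytes read and a duration of 13.897502384s for the cilium image, which works out to roughly 11.3 MB/s. A two-line check of that arithmetic, using only the figures in the log:

#!/usr/bin/env python3
# Sketch: throughput implied by the cilium image pull logged above, from the
# bytes-read counter and the reported pull duration.
bytes_read = 157_646_710        # "active requests=0, bytes read=157646710"
duration_s = 13.897502384       # "... in 13.897502384s"

rate = bytes_read / duration_s
print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")   # about 11.3 MB/s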
Oct 30 23:58:33.850031 containerd[1743]: time="2025-10-30T23:58:33.849965321Z" level=error msg="failed to handle container TaskExit event container_id:\"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" id:\"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" pid:3873 exited_at:{seconds:1761868703 nanos:838703932}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Oct 30 23:58:34.912098 containerd[1743]: time="2025-10-30T23:58:34.911951075Z" level=info msg="TaskExit event container_id:\"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" id:\"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" pid:3873 exited_at:{seconds:1761868703 nanos:838703932}" Oct 30 23:58:35.758463 containerd[1743]: time="2025-10-30T23:58:35.758400097Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Oct 30 23:58:35.759914 containerd[1743]: time="2025-10-30T23:58:35.759652979Z" level=info msg="shim disconnected" id=dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49 namespace=k8s.io Oct 30 23:58:35.759914 containerd[1743]: time="2025-10-30T23:58:35.759680379Z" level=warning msg="cleaning up after shim disconnected" id=dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49 namespace=k8s.io Oct 30 23:58:35.759914 containerd[1743]: time="2025-10-30T23:58:35.759689499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:58:35.770676 containerd[1743]: time="2025-10-30T23:58:35.770639720Z" level=info msg="Ensure that container dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49 in task-service has been cleanup successfully" Oct 30 23:58:36.457064 containerd[1743]: time="2025-10-30T23:58:36.456628714Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 30 23:58:36.486620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691111107.mount: Deactivated successfully. Oct 30 23:58:36.503228 containerd[1743]: time="2025-10-30T23:58:36.503108603Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\"" Oct 30 23:58:36.504929 containerd[1743]: time="2025-10-30T23:58:36.504056045Z" level=info msg="StartContainer for \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\"" Oct 30 23:58:36.535101 systemd[1]: Started cri-containerd-4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b.scope - libcontainer container 4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b. Oct 30 23:58:36.565448 containerd[1743]: time="2025-10-30T23:58:36.565316762Z" level=info msg="StartContainer for \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\" returns successfully" Oct 30 23:58:36.571461 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 23:58:36.571688 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:58:36.572365 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 30 23:58:36.578569 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
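The TaskExit event carries exited_at as a protobuf timestamp (seconds:1761868703 nanos:838703932). Converting it shows the task exited at 23:58:23.838 UTC, which lines up with the cri-containerd scope deactivation logged at 23:58:23.836, even though the event was only delivered about eleven seconds later after the first handling attempt hit "context deadline exceeded". A small conversion sketch:

#!/usr/bin/env python3
# Sketch: convert the exited_at protobuf timestamp from the TaskExit event above
# to UTC, to line it up with the surrounding journal entries.
from datetime import datetime, timezone

seconds, nanos = 1_761_868_703, 838_703_932          # values from the event
exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())   # 2025-10-30T23:58:23.838704+00:00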
Oct 30 23:58:36.578753 systemd[1]: cri-containerd-4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b.scope: Deactivated successfully. Oct 30 23:58:36.591190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 23:58:36.623571 containerd[1743]: time="2025-10-30T23:58:36.623488554Z" level=info msg="shim disconnected" id=4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b namespace=k8s.io Oct 30 23:58:36.623850 containerd[1743]: time="2025-10-30T23:58:36.623611034Z" level=warning msg="cleaning up after shim disconnected" id=4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b namespace=k8s.io Oct 30 23:58:36.623850 containerd[1743]: time="2025-10-30T23:58:36.623622394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:58:37.461288 containerd[1743]: time="2025-10-30T23:58:37.460377597Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 30 23:58:37.481338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b-rootfs.mount: Deactivated successfully. Oct 30 23:58:37.955822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657488663.mount: Deactivated successfully. Oct 30 23:58:38.150868 containerd[1743]: time="2025-10-30T23:58:38.150782999Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\"" Oct 30 23:58:38.152985 containerd[1743]: time="2025-10-30T23:58:38.152048201Z" level=info msg="StartContainer for \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\"" Oct 30 23:58:38.180029 systemd[1]: Started cri-containerd-9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e.scope - libcontainer container 9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e. Oct 30 23:58:38.208127 systemd[1]: cri-containerd-9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e.scope: Deactivated successfully. Oct 30 23:58:38.311337 containerd[1743]: time="2025-10-30T23:58:38.311225586Z" level=info msg="StartContainer for \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\" returns successfully" Oct 30 23:58:38.481570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e-rootfs.mount: Deactivated successfully. 
Oct 30 23:58:41.455207 containerd[1743]: time="2025-10-30T23:58:41.454815868Z" level=info msg="shim disconnected" id=9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e namespace=k8s.io Oct 30 23:58:41.455207 containerd[1743]: time="2025-10-30T23:58:41.454876588Z" level=warning msg="cleaning up after shim disconnected" id=9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e namespace=k8s.io Oct 30 23:58:41.455207 containerd[1743]: time="2025-10-30T23:58:41.454905508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:58:41.852305 containerd[1743]: time="2025-10-30T23:58:41.852054804Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:41.856249 containerd[1743]: time="2025-10-30T23:58:41.856208810Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Oct 30 23:58:41.901929 containerd[1743]: time="2025-10-30T23:58:41.901504756Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:41.902926 containerd[1743]: time="2025-10-30T23:58:41.902651198Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 18.492271287s" Oct 30 23:58:41.902926 containerd[1743]: time="2025-10-30T23:58:41.902687278Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 30 23:58:41.905169 containerd[1743]: time="2025-10-30T23:58:41.905124481Z" level=info msg="CreateContainer within sandbox \"a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 30 23:58:42.061172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467174138.mount: Deactivated successfully. Oct 30 23:58:42.064437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067560567.mount: Deactivated successfully. Oct 30 23:58:42.199546 containerd[1743]: time="2025-10-30T23:58:42.199452988Z" level=info msg="CreateContainer within sandbox \"a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\"" Oct 30 23:58:42.200346 containerd[1743]: time="2025-10-30T23:58:42.199911749Z" level=info msg="StartContainer for \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\"" Oct 30 23:58:42.229098 systemd[1]: Started cri-containerd-ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9.scope - libcontainer container ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9. 
Oct 30 23:58:42.257768 containerd[1743]: time="2025-10-30T23:58:42.257723193Z" level=info msg="StartContainer for \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\" returns successfully" Oct 30 23:58:42.482781 containerd[1743]: time="2025-10-30T23:58:42.482407119Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 30 23:58:42.487231 kubelet[3457]: I1030 23:58:42.487153 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v82mc" podStartSLOduration=4.256294837 podStartE2EDuration="36.487126566s" podCreationTimestamp="2025-10-30 23:58:06 +0000 UTC" firstStartedPulling="2025-10-30 23:58:09.67283007 +0000 UTC m=+9.489726788" lastFinishedPulling="2025-10-30 23:58:41.903661799 +0000 UTC m=+41.720558517" observedRunningTime="2025-10-30 23:58:42.487017366 +0000 UTC m=+42.303914084" watchObservedRunningTime="2025-10-30 23:58:42.487126566 +0000 UTC m=+42.304023284" Oct 30 23:58:42.712277 containerd[1743]: time="2025-10-30T23:58:42.712223212Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\"" Oct 30 23:58:42.714092 containerd[1743]: time="2025-10-30T23:58:42.714056055Z" level=info msg="StartContainer for \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\"" Oct 30 23:58:42.747246 systemd[1]: Started cri-containerd-eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810.scope - libcontainer container eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810. Oct 30 23:58:42.791343 systemd[1]: cri-containerd-eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810.scope: Deactivated successfully. Oct 30 23:58:42.797030 containerd[1743]: time="2025-10-30T23:58:42.796976855Z" level=info msg="StartContainer for \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\" returns successfully" Oct 30 23:58:43.058415 systemd[1]: run-containerd-runc-k8s.io-ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9-runc.1k5O4c.mount: Deactivated successfully. 
Oct 30 23:58:43.903767 containerd[1743]: time="2025-10-30T23:58:43.903627141Z" level=info msg="shim disconnected" id=eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810 namespace=k8s.io Oct 30 23:58:43.903767 containerd[1743]: time="2025-10-30T23:58:43.903762462Z" level=warning msg="cleaning up after shim disconnected" id=eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810 namespace=k8s.io Oct 30 23:58:43.903767 containerd[1743]: time="2025-10-30T23:58:43.903772342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 23:58:44.490766 containerd[1743]: time="2025-10-30T23:58:44.490603763Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 30 23:58:44.711924 containerd[1743]: time="2025-10-30T23:58:44.711534587Z" level=info msg="CreateContainer within sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\"" Oct 30 23:58:44.712285 containerd[1743]: time="2025-10-30T23:58:44.712260308Z" level=info msg="StartContainer for \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\"" Oct 30 23:58:44.749048 systemd[1]: Started cri-containerd-2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41.scope - libcontainer container 2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41. Oct 30 23:58:44.789399 containerd[1743]: time="2025-10-30T23:58:44.789338896Z" level=info msg="StartContainer for \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\" returns successfully" Oct 30 23:58:44.874829 kubelet[3457]: I1030 23:58:44.874620 3457 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 23:58:44.926844 systemd[1]: Created slice kubepods-burstable-podee446cf3_5580_4616_b928_26f4b4fa6c76.slice - libcontainer container kubepods-burstable-podee446cf3_5580_4616_b928_26f4b4fa6c76.slice. Oct 30 23:58:44.941350 systemd[1]: Created slice kubepods-burstable-pod16d5c970_388d_4013_a4df_7a02088be9e1.slice - libcontainer container kubepods-burstable-pod16d5c970_388d_4013_a4df_7a02088be9e1.slice. 
Oct 30 23:58:44.990287 kubelet[3457]: I1030 23:58:44.990228 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16d5c970-388d-4013-a4df-7a02088be9e1-config-volume\") pod \"coredns-668d6bf9bc-792dc\" (UID: \"16d5c970-388d-4013-a4df-7a02088be9e1\") " pod="kube-system/coredns-668d6bf9bc-792dc" Oct 30 23:58:44.990432 kubelet[3457]: I1030 23:58:44.990324 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxtv\" (UniqueName: \"kubernetes.io/projected/ee446cf3-5580-4616-b928-26f4b4fa6c76-kube-api-access-nlxtv\") pod \"coredns-668d6bf9bc-2p9mg\" (UID: \"ee446cf3-5580-4616-b928-26f4b4fa6c76\") " pod="kube-system/coredns-668d6bf9bc-2p9mg" Oct 30 23:58:44.990432 kubelet[3457]: I1030 23:58:44.990345 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5444r\" (UniqueName: \"kubernetes.io/projected/16d5c970-388d-4013-a4df-7a02088be9e1-kube-api-access-5444r\") pod \"coredns-668d6bf9bc-792dc\" (UID: \"16d5c970-388d-4013-a4df-7a02088be9e1\") " pod="kube-system/coredns-668d6bf9bc-792dc" Oct 30 23:58:44.990432 kubelet[3457]: I1030 23:58:44.990361 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee446cf3-5580-4616-b928-26f4b4fa6c76-config-volume\") pod \"coredns-668d6bf9bc-2p9mg\" (UID: \"ee446cf3-5580-4616-b928-26f4b4fa6c76\") " pod="kube-system/coredns-668d6bf9bc-2p9mg" Oct 30 23:58:45.233317 containerd[1743]: time="2025-10-30T23:58:45.233009665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2p9mg,Uid:ee446cf3-5580-4616-b928-26f4b4fa6c76,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:45.246986 containerd[1743]: time="2025-10-30T23:58:45.246944292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-792dc,Uid:16d5c970-388d-4013-a4df-7a02088be9e1,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:47.571585 systemd-networkd[1341]: cilium_host: Link UP Oct 30 23:58:47.571715 systemd-networkd[1341]: cilium_net: Link UP Oct 30 23:58:47.571718 systemd-networkd[1341]: cilium_net: Gained carrier Oct 30 23:58:47.571856 systemd-networkd[1341]: cilium_host: Gained carrier Oct 30 23:58:47.769687 systemd-networkd[1341]: cilium_vxlan: Link UP Oct 30 23:58:47.769699 systemd-networkd[1341]: cilium_vxlan: Gained carrier Oct 30 23:58:47.925006 systemd-networkd[1341]: cilium_net: Gained IPv6LL Oct 30 23:58:48.045042 systemd-networkd[1341]: cilium_host: Gained IPv6LL Oct 30 23:58:48.087047 kernel: NET: Registered PF_ALG protocol family Oct 30 23:58:48.974581 systemd-networkd[1341]: lxc_health: Link UP Oct 30 23:58:48.984581 systemd-networkd[1341]: lxc_health: Gained carrier Oct 30 23:58:49.309028 systemd-networkd[1341]: cilium_vxlan: Gained IPv6LL Oct 30 23:58:49.378866 systemd-networkd[1341]: lxc6971070313bf: Link UP Oct 30 23:58:49.386930 kernel: eth0: renamed from tmp27772 Oct 30 23:58:49.391066 systemd-networkd[1341]: lxc6971070313bf: Gained carrier Oct 30 23:58:49.480252 systemd-networkd[1341]: lxcbf103fd3c2ac: Link UP Oct 30 23:58:49.481921 kernel: eth0: renamed from tmp37907 Oct 30 23:58:49.489750 systemd-networkd[1341]: lxcbf103fd3c2ac: Gained carrier Oct 30 23:58:49.782571 kubelet[3457]: I1030 23:58:49.782085 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t76xf" 
podStartSLOduration=30.881494853 podStartE2EDuration="44.782068203s" podCreationTimestamp="2025-10-30 23:58:05 +0000 UTC" firstStartedPulling="2025-10-30 23:58:09.508820199 +0000 UTC m=+9.325716917" lastFinishedPulling="2025-10-30 23:58:23.409393549 +0000 UTC m=+23.226290267" observedRunningTime="2025-10-30 23:58:45.516252208 +0000 UTC m=+45.333148926" watchObservedRunningTime="2025-10-30 23:58:49.782068203 +0000 UTC m=+49.598964921" Oct 30 23:58:50.270082 systemd-networkd[1341]: lxc_health: Gained IPv6LL Oct 30 23:58:50.909101 systemd-networkd[1341]: lxcbf103fd3c2ac: Gained IPv6LL Oct 30 23:58:51.293042 systemd-networkd[1341]: lxc6971070313bf: Gained IPv6LL Oct 30 23:58:53.064325 containerd[1743]: time="2025-10-30T23:58:53.064219763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:58:53.065006 containerd[1743]: time="2025-10-30T23:58:53.064913565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:58:53.065006 containerd[1743]: time="2025-10-30T23:58:53.065057525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:53.065579 containerd[1743]: time="2025-10-30T23:58:53.065512886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:53.092032 systemd[1]: Started cri-containerd-379072b93b4c207ae59aa1f80dcdd3148a3eee9aee557b169ca161df9e5cc7a0.scope - libcontainer container 379072b93b4c207ae59aa1f80dcdd3148a3eee9aee557b169ca161df9e5cc7a0. Oct 30 23:58:53.112728 containerd[1743]: time="2025-10-30T23:58:53.111460773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 30 23:58:53.112728 containerd[1743]: time="2025-10-30T23:58:53.111520053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 30 23:58:53.112728 containerd[1743]: time="2025-10-30T23:58:53.111534733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:53.114013 containerd[1743]: time="2025-10-30T23:58:53.113893018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 30 23:58:53.134139 systemd[1]: Started cri-containerd-277720f9eb3ff268c4671d3d943c3ba25cc38ae0c5b9b9cb6f30bbd8b8d7dc48.scope - libcontainer container 277720f9eb3ff268c4671d3d943c3ba25cc38ae0c5b9b9cb6f30bbd8b8d7dc48. 
Oct 30 23:58:53.161360 containerd[1743]: time="2025-10-30T23:58:53.161169628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-792dc,Uid:16d5c970-388d-4013-a4df-7a02088be9e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"379072b93b4c207ae59aa1f80dcdd3148a3eee9aee557b169ca161df9e5cc7a0\"" Oct 30 23:58:53.167540 containerd[1743]: time="2025-10-30T23:58:53.167241599Z" level=info msg="CreateContainer within sandbox \"379072b93b4c207ae59aa1f80dcdd3148a3eee9aee557b169ca161df9e5cc7a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 23:58:53.189055 containerd[1743]: time="2025-10-30T23:58:53.188992201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2p9mg,Uid:ee446cf3-5580-4616-b928-26f4b4fa6c76,Namespace:kube-system,Attempt:0,} returns sandbox id \"277720f9eb3ff268c4671d3d943c3ba25cc38ae0c5b9b9cb6f30bbd8b8d7dc48\"" Oct 30 23:58:53.192608 containerd[1743]: time="2025-10-30T23:58:53.192579847Z" level=info msg="CreateContainer within sandbox \"277720f9eb3ff268c4671d3d943c3ba25cc38ae0c5b9b9cb6f30bbd8b8d7dc48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 23:58:53.653263 containerd[1743]: time="2025-10-30T23:58:53.653206483Z" level=info msg="CreateContainer within sandbox \"379072b93b4c207ae59aa1f80dcdd3148a3eee9aee557b169ca161df9e5cc7a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbaa9dcbbf2168d3ad89bf225d773a155e75f3a58f5ed67b51052eb996e0591b\"" Oct 30 23:58:53.654173 containerd[1743]: time="2025-10-30T23:58:53.654139765Z" level=info msg="StartContainer for \"dbaa9dcbbf2168d3ad89bf225d773a155e75f3a58f5ed67b51052eb996e0591b\"" Oct 30 23:58:53.684069 systemd[1]: Started cri-containerd-dbaa9dcbbf2168d3ad89bf225d773a155e75f3a58f5ed67b51052eb996e0591b.scope - libcontainer container dbaa9dcbbf2168d3ad89bf225d773a155e75f3a58f5ed67b51052eb996e0591b. Oct 30 23:58:53.693656 containerd[1743]: time="2025-10-30T23:58:53.693610920Z" level=info msg="CreateContainer within sandbox \"277720f9eb3ff268c4671d3d943c3ba25cc38ae0c5b9b9cb6f30bbd8b8d7dc48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0d623dc11a03da228bf2ed1e557fffbb94aaf445455f387a72036c30371eed0\"" Oct 30 23:58:53.695327 containerd[1743]: time="2025-10-30T23:58:53.695292883Z" level=info msg="StartContainer for \"c0d623dc11a03da228bf2ed1e557fffbb94aaf445455f387a72036c30371eed0\"" Oct 30 23:58:53.723528 containerd[1743]: time="2025-10-30T23:58:53.723405257Z" level=info msg="StartContainer for \"dbaa9dcbbf2168d3ad89bf225d773a155e75f3a58f5ed67b51052eb996e0591b\" returns successfully" Oct 30 23:58:53.735072 systemd[1]: Started cri-containerd-c0d623dc11a03da228bf2ed1e557fffbb94aaf445455f387a72036c30371eed0.scope - libcontainer container c0d623dc11a03da228bf2ed1e557fffbb94aaf445455f387a72036c30371eed0. 
Oct 30 23:58:53.780212 containerd[1743]: time="2025-10-30T23:58:53.780082125Z" level=info msg="StartContainer for \"c0d623dc11a03da228bf2ed1e557fffbb94aaf445455f387a72036c30371eed0\" returns successfully" Oct 30 23:58:54.528149 kubelet[3457]: I1030 23:58:54.528087 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-792dc" podStartSLOduration=48.527974426 podStartE2EDuration="48.527974426s" podCreationTimestamp="2025-10-30 23:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:54.525474782 +0000 UTC m=+54.342371500" watchObservedRunningTime="2025-10-30 23:58:54.527974426 +0000 UTC m=+54.344871144" Oct 31 00:00:03.779212 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Oct 31 00:00:03.821848 systemd[1]: logrotate.service: Deactivated successfully. Oct 31 00:00:21.669764 update_engine[1721]: I20251031 00:00:21.669703 1721 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 31 00:00:21.669764 update_engine[1721]: I20251031 00:00:21.669754 1721 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 31 00:00:21.670195 update_engine[1721]: I20251031 00:00:21.669973 1721 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 31 00:00:21.670387 update_engine[1721]: I20251031 00:00:21.670356 1721 omaha_request_params.cc:62] Current group set to stable Oct 31 00:00:21.670478 update_engine[1721]: I20251031 00:00:21.670457 1721 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 31 00:00:21.670478 update_engine[1721]: I20251031 00:00:21.670471 1721 update_attempter.cc:643] Scheduling an action processor start. Oct 31 00:00:21.670523 update_engine[1721]: I20251031 00:00:21.670490 1721 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 31 00:00:21.670543 update_engine[1721]: I20251031 00:00:21.670519 1721 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 31 00:00:21.670694 update_engine[1721]: I20251031 00:00:21.670579 1721 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 31 00:00:21.670694 update_engine[1721]: I20251031 00:00:21.670593 1721 omaha_request_action.cc:272] Request: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: Oct 31 00:00:21.670694 update_engine[1721]: I20251031 00:00:21.670599 1721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 31 00:00:21.671101 locksmithd[1848]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 31 00:00:21.671903 update_engine[1721]: I20251031 00:00:21.671849 1721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 31 00:00:21.672258 update_engine[1721]: I20251031 00:00:21.672224 1721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 31 00:00:21.780613 update_engine[1721]: E20251031 00:00:21.780550 1721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 31 00:00:21.780743 update_engine[1721]: I20251031 00:00:21.780663 1721 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 31 00:00:31.597087 update_engine[1721]: I20251031 00:00:31.597015 1721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 31 00:00:31.597424 update_engine[1721]: I20251031 00:00:31.597254 1721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 31 00:00:31.597541 update_engine[1721]: I20251031 00:00:31.597507 1721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 31 00:00:31.639093 update_engine[1721]: E20251031 00:00:31.639035 1721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 31 00:00:31.639213 update_engine[1721]: I20251031 00:00:31.639128 1721 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 31 00:00:41.597005 update_engine[1721]: I20251031 00:00:41.596915 1721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 31 00:00:41.597460 update_engine[1721]: I20251031 00:00:41.597194 1721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 31 00:00:41.597460 update_engine[1721]: I20251031 00:00:41.597445 1721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 31 00:00:41.649840 update_engine[1721]: E20251031 00:00:41.649717 1721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 31 00:00:41.649840 update_engine[1721]: I20251031 00:00:41.649807 1721 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 31 00:00:46.423388 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.16.10:43992.service - OpenSSH per-connection server daemon (10.200.16.10:43992). Oct 31 00:00:46.918415 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 43992 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:00:46.919727 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:46.924982 systemd-logind[1718]: New session 10 of user core. Oct 31 00:00:46.933108 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 00:00:47.415330 sshd[4876]: Connection closed by 10.200.16.10 port 43992 Oct 31 00:00:47.415914 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:47.418524 systemd[1]: sshd@7-10.200.20.15:22-10.200.16.10:43992.service: Deactivated successfully. Oct 31 00:00:47.420685 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 00:00:47.421521 systemd-logind[1718]: Session 10 logged out. Waiting for processes to exit. Oct 31 00:00:47.423355 systemd-logind[1718]: Removed session 10. Oct 31 00:00:51.594914 update_engine[1721]: I20251031 00:00:51.594492 1721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 31 00:00:51.594914 update_engine[1721]: I20251031 00:00:51.594762 1721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 31 00:00:51.595280 update_engine[1721]: I20251031 00:00:51.595038 1721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 31 00:00:51.683573 update_engine[1721]: E20251031 00:00:51.683516 1721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 31 00:00:51.683707 update_engine[1721]: I20251031 00:00:51.683604 1721 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 31 00:00:51.683707 update_engine[1721]: I20251031 00:00:51.683614 1721 omaha_request_action.cc:617] Omaha request response: Oct 31 00:00:51.683707 update_engine[1721]: E20251031 00:00:51.683696 1721 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 31 00:00:51.683768 update_engine[1721]: I20251031 00:00:51.683712 1721 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 31 00:00:51.683768 update_engine[1721]: I20251031 00:00:51.683719 1721 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 31 00:00:51.683768 update_engine[1721]: I20251031 00:00:51.683723 1721 update_attempter.cc:306] Processing Done. Oct 31 00:00:51.683768 update_engine[1721]: E20251031 00:00:51.683737 1721 update_attempter.cc:619] Update failed. Oct 31 00:00:51.683768 update_engine[1721]: I20251031 00:00:51.683743 1721 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 31 00:00:51.683768 update_engine[1721]: I20251031 00:00:51.683748 1721 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 31 00:00:51.683768 update_engine[1721]: I20251031 00:00:51.683751 1721 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 31 00:00:51.684101 update_engine[1721]: I20251031 00:00:51.683960 1721 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 31 00:00:51.684101 update_engine[1721]: I20251031 00:00:51.683998 1721 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 31 00:00:51.684101 update_engine[1721]: I20251031 00:00:51.684007 1721 omaha_request_action.cc:272] Request: Oct 31 00:00:51.684101 update_engine[1721]: Oct 31 00:00:51.684101 update_engine[1721]: Oct 31 00:00:51.684101 update_engine[1721]: Oct 31 00:00:51.684101 update_engine[1721]: Oct 31 00:00:51.684101 update_engine[1721]: Oct 31 00:00:51.684101 update_engine[1721]: Oct 31 00:00:51.684101 update_engine[1721]: I20251031 00:00:51.684013 1721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 31 00:00:51.684295 locksmithd[1848]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 31 00:00:51.684534 update_engine[1721]: I20251031 00:00:51.684151 1721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 31 00:00:51.684534 update_engine[1721]: I20251031 00:00:51.684365 1721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 31 00:00:51.970032 update_engine[1721]: E20251031 00:00:51.969975 1721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 31 00:00:51.970209 update_engine[1721]: I20251031 00:00:51.970062 1721 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 31 00:00:51.970209 update_engine[1721]: I20251031 00:00:51.970073 1721 omaha_request_action.cc:617] Omaha request response: Oct 31 00:00:51.970209 update_engine[1721]: I20251031 00:00:51.970080 1721 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 31 00:00:51.970209 update_engine[1721]: I20251031 00:00:51.970088 1721 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 31 00:00:51.970209 update_engine[1721]: I20251031 00:00:51.970093 1721 update_attempter.cc:306] Processing Done. Oct 31 00:00:51.970209 update_engine[1721]: I20251031 00:00:51.970099 1721 update_attempter.cc:310] Error event sent. Oct 31 00:00:51.970209 update_engine[1721]: I20251031 00:00:51.970110 1721 update_check_scheduler.cc:74] Next update check in 43m6s Oct 31 00:00:51.970573 locksmithd[1848]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 31 00:00:52.502324 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.16.10:38628.service - OpenSSH per-connection server daemon (10.200.16.10:38628). Oct 31 00:00:52.970426 sshd[4889]: Accepted publickey for core from 10.200.16.10 port 38628 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:00:52.971799 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:52.976844 systemd-logind[1718]: New session 11 of user core. Oct 31 00:00:52.980095 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 00:00:53.365135 sshd[4891]: Connection closed by 10.200.16.10 port 38628 Oct 31 00:00:53.366097 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:53.370442 systemd-logind[1718]: Session 11 logged out. Waiting for processes to exit. Oct 31 00:00:53.370687 systemd[1]: sshd@8-10.200.20.15:22-10.200.16.10:38628.service: Deactivated successfully. Oct 31 00:00:53.374321 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 00:00:53.375332 systemd-logind[1718]: Removed session 11. Oct 31 00:00:58.440467 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.16.10:38642.service - OpenSSH per-connection server daemon (10.200.16.10:38642). Oct 31 00:00:58.863349 sshd[4907]: Accepted publickey for core from 10.200.16.10 port 38642 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:00:58.864623 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:00:58.869298 systemd-logind[1718]: New session 12 of user core. Oct 31 00:00:58.876112 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 00:00:59.233988 sshd[4909]: Connection closed by 10.200.16.10 port 38642 Oct 31 00:00:59.234744 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Oct 31 00:00:59.237559 systemd-logind[1718]: Session 12 logged out. Waiting for processes to exit. Oct 31 00:00:59.238492 systemd[1]: sshd@9-10.200.20.15:22-10.200.16.10:38642.service: Deactivated successfully. Oct 31 00:00:59.241544 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 31 00:00:59.243452 systemd-logind[1718]: Removed session 12. Oct 31 00:01:04.333228 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.16.10:57696.service - OpenSSH per-connection server daemon (10.200.16.10:57696). Oct 31 00:01:04.803725 sshd[4923]: Accepted publickey for core from 10.200.16.10 port 57696 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:04.804591 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:04.809432 systemd-logind[1718]: New session 13 of user core. Oct 31 00:01:04.817079 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 00:01:05.204192 sshd[4925]: Connection closed by 10.200.16.10 port 57696 Oct 31 00:01:05.204102 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:05.207761 systemd[1]: sshd@10-10.200.20.15:22-10.200.16.10:57696.service: Deactivated successfully. Oct 31 00:01:05.209829 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 00:01:05.211249 systemd-logind[1718]: Session 13 logged out. Waiting for processes to exit. Oct 31 00:01:05.212371 systemd-logind[1718]: Removed session 13. Oct 31 00:01:10.293163 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.16.10:42832.service - OpenSSH per-connection server daemon (10.200.16.10:42832). Oct 31 00:01:10.764917 sshd[4940]: Accepted publickey for core from 10.200.16.10 port 42832 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:10.765571 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:10.769571 systemd-logind[1718]: New session 14 of user core. Oct 31 00:01:10.773046 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 00:01:11.169116 sshd[4942]: Connection closed by 10.200.16.10 port 42832 Oct 31 00:01:11.169027 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:11.173077 systemd-logind[1718]: Session 14 logged out. Waiting for processes to exit. Oct 31 00:01:11.173243 systemd[1]: sshd@11-10.200.20.15:22-10.200.16.10:42832.service: Deactivated successfully. Oct 31 00:01:11.175355 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 00:01:11.176430 systemd-logind[1718]: Removed session 14. Oct 31 00:01:16.268234 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.16.10:42840.service - OpenSSH per-connection server daemon (10.200.16.10:42840). Oct 31 00:01:16.763187 sshd[4955]: Accepted publickey for core from 10.200.16.10 port 42840 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:16.764134 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:16.768213 systemd-logind[1718]: New session 15 of user core. Oct 31 00:01:16.781053 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 00:01:17.203520 sshd[4957]: Connection closed by 10.200.16.10 port 42840 Oct 31 00:01:17.204152 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:17.207722 systemd[1]: sshd@12-10.200.20.15:22-10.200.16.10:42840.service: Deactivated successfully. Oct 31 00:01:17.209614 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 00:01:17.211273 systemd-logind[1718]: Session 15 logged out. Waiting for processes to exit. Oct 31 00:01:17.212709 systemd-logind[1718]: Removed session 15. 
Oct 31 00:01:22.291234 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.16.10:39082.service - OpenSSH per-connection server daemon (10.200.16.10:39082). Oct 31 00:01:22.747536 sshd[4969]: Accepted publickey for core from 10.200.16.10 port 39082 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:22.749062 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:22.755158 systemd-logind[1718]: New session 16 of user core. Oct 31 00:01:22.762094 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 00:01:23.153437 sshd[4971]: Connection closed by 10.200.16.10 port 39082 Oct 31 00:01:23.154057 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:23.158129 systemd-logind[1718]: Session 16 logged out. Waiting for processes to exit. Oct 31 00:01:23.158404 systemd[1]: sshd@13-10.200.20.15:22-10.200.16.10:39082.service: Deactivated successfully. Oct 31 00:01:23.160792 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 00:01:23.161960 systemd-logind[1718]: Removed session 16. Oct 31 00:01:28.228198 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.16.10:39094.service - OpenSSH per-connection server daemon (10.200.16.10:39094). Oct 31 00:01:28.649943 sshd[4984]: Accepted publickey for core from 10.200.16.10 port 39094 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:28.651286 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:28.655468 systemd-logind[1718]: New session 17 of user core. Oct 31 00:01:28.662268 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 00:01:29.022964 sshd[4986]: Connection closed by 10.200.16.10 port 39094 Oct 31 00:01:29.022372 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:29.025461 systemd-logind[1718]: Session 17 logged out. Waiting for processes to exit. Oct 31 00:01:29.025792 systemd[1]: sshd@14-10.200.20.15:22-10.200.16.10:39094.service: Deactivated successfully. Oct 31 00:01:29.027593 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 00:01:29.033248 systemd-logind[1718]: Removed session 17. Oct 31 00:01:34.115163 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.16.10:59758.service - OpenSSH per-connection server daemon (10.200.16.10:59758). Oct 31 00:01:34.580540 sshd[4998]: Accepted publickey for core from 10.200.16.10 port 59758 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:34.582005 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:34.585995 systemd-logind[1718]: New session 18 of user core. Oct 31 00:01:34.597186 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 00:01:34.981406 sshd[5000]: Connection closed by 10.200.16.10 port 59758 Oct 31 00:01:34.980754 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:34.984191 systemd[1]: sshd@15-10.200.20.15:22-10.200.16.10:59758.service: Deactivated successfully. Oct 31 00:01:34.986184 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 00:01:34.986974 systemd-logind[1718]: Session 18 logged out. Waiting for processes to exit. Oct 31 00:01:34.987981 systemd-logind[1718]: Removed session 18. Oct 31 00:01:35.074159 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.16.10:59762.service - OpenSSH per-connection server daemon (10.200.16.10:59762). 
Oct 31 00:01:35.531012 sshd[5013]: Accepted publickey for core from 10.200.16.10 port 59762 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:35.532331 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:35.536428 systemd-logind[1718]: New session 19 of user core. Oct 31 00:01:35.544065 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 00:01:35.955506 sshd[5015]: Connection closed by 10.200.16.10 port 59762 Oct 31 00:01:35.955939 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:35.959056 systemd[1]: sshd@16-10.200.20.15:22-10.200.16.10:59762.service: Deactivated successfully. Oct 31 00:01:35.960698 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 00:01:35.962816 systemd-logind[1718]: Session 19 logged out. Waiting for processes to exit. Oct 31 00:01:35.964389 systemd-logind[1718]: Removed session 19. Oct 31 00:01:36.038158 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.16.10:59778.service - OpenSSH per-connection server daemon (10.200.16.10:59778). Oct 31 00:01:36.454811 sshd[5024]: Accepted publickey for core from 10.200.16.10 port 59778 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:36.456425 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:36.460954 systemd-logind[1718]: New session 20 of user core. Oct 31 00:01:36.465038 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 00:01:36.848012 sshd[5026]: Connection closed by 10.200.16.10 port 59778 Oct 31 00:01:36.848395 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:36.851359 systemd-logind[1718]: Session 20 logged out. Waiting for processes to exit. Oct 31 00:01:36.851556 systemd[1]: sshd@17-10.200.20.15:22-10.200.16.10:59778.service: Deactivated successfully. Oct 31 00:01:36.853759 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 00:01:36.855735 systemd-logind[1718]: Removed session 20. Oct 31 00:01:41.937163 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.16.10:59934.service - OpenSSH per-connection server daemon (10.200.16.10:59934). Oct 31 00:01:42.395765 sshd[5039]: Accepted publickey for core from 10.200.16.10 port 59934 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:42.396630 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:42.402024 systemd-logind[1718]: New session 21 of user core. Oct 31 00:01:42.406059 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 00:01:42.793269 sshd[5041]: Connection closed by 10.200.16.10 port 59934 Oct 31 00:01:42.792694 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:42.796478 systemd[1]: sshd@18-10.200.20.15:22-10.200.16.10:59934.service: Deactivated successfully. Oct 31 00:01:42.798647 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 00:01:42.799868 systemd-logind[1718]: Session 21 logged out. Waiting for processes to exit. Oct 31 00:01:42.800766 systemd-logind[1718]: Removed session 21. Oct 31 00:01:47.869128 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.16.10:59950.service - OpenSSH per-connection server daemon (10.200.16.10:59950). 
Oct 31 00:01:48.296921 sshd[5052]: Accepted publickey for core from 10.200.16.10 port 59950 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:48.297960 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:48.302629 systemd-logind[1718]: New session 22 of user core. Oct 31 00:01:48.306038 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 00:01:48.682910 sshd[5054]: Connection closed by 10.200.16.10 port 59950 Oct 31 00:01:48.683467 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:48.687122 systemd-logind[1718]: Session 22 logged out. Waiting for processes to exit. Oct 31 00:01:48.687676 systemd[1]: sshd@19-10.200.20.15:22-10.200.16.10:59950.service: Deactivated successfully. Oct 31 00:01:48.690554 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 00:01:48.691712 systemd-logind[1718]: Removed session 22. Oct 31 00:01:53.782497 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.16.10:49638.service - OpenSSH per-connection server daemon (10.200.16.10:49638). Oct 31 00:01:54.235928 sshd[5066]: Accepted publickey for core from 10.200.16.10 port 49638 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:54.237227 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:54.241791 systemd-logind[1718]: New session 23 of user core. Oct 31 00:01:54.245091 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 00:01:54.632996 sshd[5068]: Connection closed by 10.200.16.10 port 49638 Oct 31 00:01:54.633705 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:54.637270 systemd[1]: sshd@20-10.200.20.15:22-10.200.16.10:49638.service: Deactivated successfully. Oct 31 00:01:54.641349 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 00:01:54.642447 systemd-logind[1718]: Session 23 logged out. Waiting for processes to exit. Oct 31 00:01:54.643413 systemd-logind[1718]: Removed session 23. Oct 31 00:01:54.718427 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.16.10:49654.service - OpenSSH per-connection server daemon (10.200.16.10:49654). Oct 31 00:01:55.186465 sshd[5080]: Accepted publickey for core from 10.200.16.10 port 49654 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:55.187750 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:55.192250 systemd-logind[1718]: New session 24 of user core. Oct 31 00:01:55.199051 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 31 00:01:55.709330 sshd[5082]: Connection closed by 10.200.16.10 port 49654 Oct 31 00:01:55.709227 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:55.712934 systemd-logind[1718]: Session 24 logged out. Waiting for processes to exit. Oct 31 00:01:55.713497 systemd[1]: sshd@21-10.200.20.15:22-10.200.16.10:49654.service: Deactivated successfully. Oct 31 00:01:55.717014 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 00:01:55.718316 systemd-logind[1718]: Removed session 24. Oct 31 00:01:55.806187 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.16.10:49662.service - OpenSSH per-connection server daemon (10.200.16.10:49662). 
Oct 31 00:01:56.299755 sshd[5092]: Accepted publickey for core from 10.200.16.10 port 49662 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:56.301259 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:56.305629 systemd-logind[1718]: New session 25 of user core. Oct 31 00:01:56.309220 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 31 00:01:57.212248 sshd[5094]: Connection closed by 10.200.16.10 port 49662 Oct 31 00:01:57.212898 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:57.217524 systemd[1]: sshd@22-10.200.20.15:22-10.200.16.10:49662.service: Deactivated successfully. Oct 31 00:01:57.221649 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 00:01:57.222712 systemd-logind[1718]: Session 25 logged out. Waiting for processes to exit. Oct 31 00:01:57.223609 systemd-logind[1718]: Removed session 25. Oct 31 00:01:57.329235 systemd[1]: Started sshd@23-10.200.20.15:22-10.200.16.10:49668.service - OpenSSH per-connection server daemon (10.200.16.10:49668). Oct 31 00:01:57.797104 sshd[5111]: Accepted publickey for core from 10.200.16.10 port 49668 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:57.798384 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:57.806298 systemd-logind[1718]: New session 26 of user core. Oct 31 00:01:57.812070 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 31 00:01:58.301413 sshd[5113]: Connection closed by 10.200.16.10 port 49668 Oct 31 00:01:58.301598 sshd-session[5111]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:58.305007 systemd[1]: sshd@23-10.200.20.15:22-10.200.16.10:49668.service: Deactivated successfully. Oct 31 00:01:58.307712 systemd[1]: session-26.scope: Deactivated successfully. Oct 31 00:01:58.309297 systemd-logind[1718]: Session 26 logged out. Waiting for processes to exit. Oct 31 00:01:58.312085 systemd-logind[1718]: Removed session 26. Oct 31 00:01:58.391488 systemd[1]: Started sshd@24-10.200.20.15:22-10.200.16.10:49680.service - OpenSSH per-connection server daemon (10.200.16.10:49680). Oct 31 00:01:58.805137 sshd[5122]: Accepted publickey for core from 10.200.16.10 port 49680 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:01:58.806774 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:01:58.810744 systemd-logind[1718]: New session 27 of user core. Oct 31 00:01:58.815022 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 31 00:01:59.178347 sshd[5124]: Connection closed by 10.200.16.10 port 49680 Oct 31 00:01:59.178952 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Oct 31 00:01:59.182141 systemd[1]: sshd@24-10.200.20.15:22-10.200.16.10:49680.service: Deactivated successfully. Oct 31 00:01:59.183747 systemd[1]: session-27.scope: Deactivated successfully. Oct 31 00:01:59.184529 systemd-logind[1718]: Session 27 logged out. Waiting for processes to exit. Oct 31 00:01:59.185601 systemd-logind[1718]: Removed session 27. Oct 31 00:02:04.266555 systemd[1]: Started sshd@25-10.200.20.15:22-10.200.16.10:47262.service - OpenSSH per-connection server daemon (10.200.16.10:47262). 
Oct 31 00:02:04.741832 sshd[5137]: Accepted publickey for core from 10.200.16.10 port 47262 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:04.743440 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:04.749836 systemd-logind[1718]: New session 28 of user core. Oct 31 00:02:04.757127 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 31 00:02:05.152741 sshd[5139]: Connection closed by 10.200.16.10 port 47262 Oct 31 00:02:05.152642 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:05.156567 systemd[1]: sshd@25-10.200.20.15:22-10.200.16.10:47262.service: Deactivated successfully. Oct 31 00:02:05.158813 systemd[1]: session-28.scope: Deactivated successfully. Oct 31 00:02:05.159578 systemd-logind[1718]: Session 28 logged out. Waiting for processes to exit. Oct 31 00:02:05.160787 systemd-logind[1718]: Removed session 28. Oct 31 00:02:10.240134 systemd[1]: Started sshd@26-10.200.20.15:22-10.200.16.10:39794.service - OpenSSH per-connection server daemon (10.200.16.10:39794). Oct 31 00:02:10.706399 sshd[5153]: Accepted publickey for core from 10.200.16.10 port 39794 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:10.707720 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:10.712240 systemd-logind[1718]: New session 29 of user core. Oct 31 00:02:10.714051 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 31 00:02:11.107203 sshd[5155]: Connection closed by 10.200.16.10 port 39794 Oct 31 00:02:11.107932 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:11.111528 systemd[1]: sshd@26-10.200.20.15:22-10.200.16.10:39794.service: Deactivated successfully. Oct 31 00:02:11.115548 systemd[1]: session-29.scope: Deactivated successfully. Oct 31 00:02:11.116726 systemd-logind[1718]: Session 29 logged out. Waiting for processes to exit. Oct 31 00:02:11.117617 systemd-logind[1718]: Removed session 29. Oct 31 00:02:16.190325 systemd[1]: Started sshd@27-10.200.20.15:22-10.200.16.10:39808.service - OpenSSH per-connection server daemon (10.200.16.10:39808). Oct 31 00:02:16.608668 sshd[5167]: Accepted publickey for core from 10.200.16.10 port 39808 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:16.609867 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:16.613961 systemd-logind[1718]: New session 30 of user core. Oct 31 00:02:16.622048 systemd[1]: Started session-30.scope - Session 30 of User core. Oct 31 00:02:16.973936 sshd[5169]: Connection closed by 10.200.16.10 port 39808 Oct 31 00:02:16.974565 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:16.978620 systemd-logind[1718]: Session 30 logged out. Waiting for processes to exit. Oct 31 00:02:16.979267 systemd[1]: sshd@27-10.200.20.15:22-10.200.16.10:39808.service: Deactivated successfully. Oct 31 00:02:16.982612 systemd[1]: session-30.scope: Deactivated successfully. Oct 31 00:02:16.983757 systemd-logind[1718]: Removed session 30. Oct 31 00:02:22.064635 systemd[1]: Started sshd@28-10.200.20.15:22-10.200.16.10:54900.service - OpenSSH per-connection server daemon (10.200.16.10:54900). 
Oct 31 00:02:22.522943 sshd[5182]: Accepted publickey for core from 10.200.16.10 port 54900 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:22.524284 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:22.528587 systemd-logind[1718]: New session 31 of user core. Oct 31 00:02:22.540035 systemd[1]: Started session-31.scope - Session 31 of User core. Oct 31 00:02:22.911271 sshd[5184]: Connection closed by 10.200.16.10 port 54900 Oct 31 00:02:22.911977 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:22.915781 systemd-logind[1718]: Session 31 logged out. Waiting for processes to exit. Oct 31 00:02:22.916006 systemd[1]: sshd@28-10.200.20.15:22-10.200.16.10:54900.service: Deactivated successfully. Oct 31 00:02:22.918757 systemd[1]: session-31.scope: Deactivated successfully. Oct 31 00:02:22.920281 systemd-logind[1718]: Removed session 31. Oct 31 00:02:28.001235 systemd[1]: Started sshd@29-10.200.20.15:22-10.200.16.10:54908.service - OpenSSH per-connection server daemon (10.200.16.10:54908). Oct 31 00:02:28.465070 sshd[5196]: Accepted publickey for core from 10.200.16.10 port 54908 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:28.466491 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:28.470416 systemd-logind[1718]: New session 32 of user core. Oct 31 00:02:28.472090 systemd[1]: Started session-32.scope - Session 32 of User core. Oct 31 00:02:28.855353 sshd[5198]: Connection closed by 10.200.16.10 port 54908 Oct 31 00:02:28.855054 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:28.859768 systemd[1]: sshd@29-10.200.20.15:22-10.200.16.10:54908.service: Deactivated successfully. Oct 31 00:02:28.863564 systemd[1]: session-32.scope: Deactivated successfully. Oct 31 00:02:28.864354 systemd-logind[1718]: Session 32 logged out. Waiting for processes to exit. Oct 31 00:02:28.865528 systemd-logind[1718]: Removed session 32. Oct 31 00:02:33.935198 systemd[1]: Started sshd@30-10.200.20.15:22-10.200.16.10:58554.service - OpenSSH per-connection server daemon (10.200.16.10:58554). Oct 31 00:02:34.355210 sshd[5209]: Accepted publickey for core from 10.200.16.10 port 58554 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:34.356640 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:34.362679 systemd-logind[1718]: New session 33 of user core. Oct 31 00:02:34.367071 systemd[1]: Started session-33.scope - Session 33 of User core. Oct 31 00:02:34.730293 sshd[5211]: Connection closed by 10.200.16.10 port 58554 Oct 31 00:02:34.730910 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:34.734739 systemd[1]: sshd@30-10.200.20.15:22-10.200.16.10:58554.service: Deactivated successfully. Oct 31 00:02:34.736471 systemd[1]: session-33.scope: Deactivated successfully. Oct 31 00:02:34.738440 systemd-logind[1718]: Session 33 logged out. Waiting for processes to exit. Oct 31 00:02:34.739554 systemd-logind[1718]: Removed session 33. Oct 31 00:02:34.814189 systemd[1]: Started sshd@31-10.200.20.15:22-10.200.16.10:58556.service - OpenSSH per-connection server daemon (10.200.16.10:58556). 
Oct 31 00:02:35.231024 sshd[5222]: Accepted publickey for core from 10.200.16.10 port 58556 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:35.232876 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:35.238911 systemd-logind[1718]: New session 34 of user core. Oct 31 00:02:35.243057 systemd[1]: Started session-34.scope - Session 34 of User core. Oct 31 00:02:37.899480 kubelet[3457]: I1031 00:02:37.898756 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2p9mg" podStartSLOduration=271.898725974 podStartE2EDuration="4m31.898725974s" podCreationTimestamp="2025-10-30 23:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:54.631539103 +0000 UTC m=+54.448435821" watchObservedRunningTime="2025-10-31 00:02:37.898725974 +0000 UTC m=+277.715622772" Oct 31 00:02:37.925471 containerd[1743]: time="2025-10-31T00:02:37.925399205Z" level=info msg="StopContainer for \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\" with timeout 30 (s)" Oct 31 00:02:37.926585 containerd[1743]: time="2025-10-31T00:02:37.926555246Z" level=info msg="Stop container \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\" with signal terminated" Oct 31 00:02:37.940227 containerd[1743]: time="2025-10-31T00:02:37.940176582Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:02:37.949711 containerd[1743]: time="2025-10-31T00:02:37.948868112Z" level=info msg="StopContainer for \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\" with timeout 2 (s)" Oct 31 00:02:37.950155 systemd[1]: cri-containerd-ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9.scope: Deactivated successfully. Oct 31 00:02:37.953628 containerd[1743]: time="2025-10-31T00:02:37.953152957Z" level=info msg="Stop container \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\" with signal terminated" Oct 31 00:02:37.963619 systemd-networkd[1341]: lxc_health: Link DOWN Oct 31 00:02:37.963626 systemd-networkd[1341]: lxc_health: Lost carrier Oct 31 00:02:37.982871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9-rootfs.mount: Deactivated successfully. Oct 31 00:02:37.988686 systemd[1]: cri-containerd-2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41.scope: Deactivated successfully. Oct 31 00:02:37.989001 systemd[1]: cri-containerd-2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41.scope: Consumed 6.400s CPU time, 123.4M memory peak, 136K read from disk, 12.9M written to disk. Oct 31 00:02:38.018179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41-rootfs.mount: Deactivated successfully. 
Oct 31 00:02:38.079123 containerd[1743]: time="2025-10-31T00:02:38.078681344Z" level=info msg="shim disconnected" id=2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41 namespace=k8s.io Oct 31 00:02:38.079618 containerd[1743]: time="2025-10-31T00:02:38.079516945Z" level=warning msg="cleaning up after shim disconnected" id=2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41 namespace=k8s.io Oct 31 00:02:38.079618 containerd[1743]: time="2025-10-31T00:02:38.079542745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:38.079862 containerd[1743]: time="2025-10-31T00:02:38.079062264Z" level=info msg="shim disconnected" id=ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9 namespace=k8s.io Oct 31 00:02:38.079862 containerd[1743]: time="2025-10-31T00:02:38.079686025Z" level=warning msg="cleaning up after shim disconnected" id=ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9 namespace=k8s.io Oct 31 00:02:38.079862 containerd[1743]: time="2025-10-31T00:02:38.079693025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:38.106523 containerd[1743]: time="2025-10-31T00:02:38.106472416Z" level=info msg="StopContainer for \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\" returns successfully" Oct 31 00:02:38.107142 containerd[1743]: time="2025-10-31T00:02:38.107108497Z" level=info msg="StopPodSandbox for \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\"" Oct 31 00:02:38.107232 containerd[1743]: time="2025-10-31T00:02:38.107154297Z" level=info msg="Container to stop \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:02:38.107232 containerd[1743]: time="2025-10-31T00:02:38.107167737Z" level=info msg="Container to stop \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:02:38.107232 containerd[1743]: time="2025-10-31T00:02:38.107179977Z" level=info msg="Container to stop \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:02:38.107232 containerd[1743]: time="2025-10-31T00:02:38.107190497Z" level=info msg="Container to stop \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:02:38.107232 containerd[1743]: time="2025-10-31T00:02:38.107198977Z" level=info msg="Container to stop \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:02:38.109217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578-shm.mount: Deactivated successfully. 
Oct 31 00:02:38.112787 containerd[1743]: time="2025-10-31T00:02:38.112420743Z" level=info msg="StopContainer for \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\" returns successfully" Oct 31 00:02:38.113422 containerd[1743]: time="2025-10-31T00:02:38.113272024Z" level=info msg="StopPodSandbox for \"a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32\"" Oct 31 00:02:38.113480 containerd[1743]: time="2025-10-31T00:02:38.113423904Z" level=info msg="Container to stop \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:02:38.116952 systemd[1]: cri-containerd-97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578.scope: Deactivated successfully. Oct 31 00:02:38.127720 systemd[1]: cri-containerd-a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32.scope: Deactivated successfully. Oct 31 00:02:38.178702 containerd[1743]: time="2025-10-31T00:02:38.178543700Z" level=info msg="shim disconnected" id=97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578 namespace=k8s.io Oct 31 00:02:38.178702 containerd[1743]: time="2025-10-31T00:02:38.178598021Z" level=warning msg="cleaning up after shim disconnected" id=97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578 namespace=k8s.io Oct 31 00:02:38.178702 containerd[1743]: time="2025-10-31T00:02:38.178606141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:38.180742 containerd[1743]: time="2025-10-31T00:02:38.180387503Z" level=info msg="shim disconnected" id=a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32 namespace=k8s.io Oct 31 00:02:38.180742 containerd[1743]: time="2025-10-31T00:02:38.180435743Z" level=warning msg="cleaning up after shim disconnected" id=a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32 namespace=k8s.io Oct 31 00:02:38.180742 containerd[1743]: time="2025-10-31T00:02:38.180445623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:38.193174 containerd[1743]: time="2025-10-31T00:02:38.193064437Z" level=info msg="TearDown network for sandbox \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" successfully" Oct 31 00:02:38.193174 containerd[1743]: time="2025-10-31T00:02:38.193108397Z" level=info msg="StopPodSandbox for \"97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578\" returns successfully" Oct 31 00:02:38.200376 containerd[1743]: time="2025-10-31T00:02:38.200221486Z" level=info msg="TearDown network for sandbox \"a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32\" successfully" Oct 31 00:02:38.200376 containerd[1743]: time="2025-10-31T00:02:38.200252406Z" level=info msg="StopPodSandbox for \"a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32\" returns successfully" Oct 31 00:02:38.286942 kubelet[3457]: I1031 00:02:38.286265 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hostproc\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.286942 kubelet[3457]: I1031 00:02:38.286311 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-kernel\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: 
\"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.286942 kubelet[3457]: I1031 00:02:38.286330 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-bpf-maps\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.286942 kubelet[3457]: I1031 00:02:38.286346 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-net\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.286942 kubelet[3457]: I1031 00:02:38.286361 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-lib-modules\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.286942 kubelet[3457]: I1031 00:02:38.286378 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-xtables-lock\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287216 kubelet[3457]: I1031 00:02:38.286392 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cni-path\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287216 kubelet[3457]: I1031 00:02:38.286392 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hostproc" (OuterVolumeSpecName: "hostproc") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.287216 kubelet[3457]: I1031 00:02:38.286416 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgn44\" (UniqueName: \"kubernetes.io/projected/c6275dbb-2c27-4e0f-baea-89ef852cccb7-kube-api-access-cgn44\") pod \"c6275dbb-2c27-4e0f-baea-89ef852cccb7\" (UID: \"c6275dbb-2c27-4e0f-baea-89ef852cccb7\") " Oct 31 00:02:38.287216 kubelet[3457]: I1031 00:02:38.286439 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf34bf6e-2f6c-4fcf-863d-7d5010123939-clustermesh-secrets\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287216 kubelet[3457]: I1031 00:02:38.286454 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.287320 kubelet[3457]: I1031 00:02:38.286459 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-config-path\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287320 kubelet[3457]: I1031 00:02:38.286497 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-cgroup\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287320 kubelet[3457]: I1031 00:02:38.286516 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-252cw\" (UniqueName: \"kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-kube-api-access-252cw\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287320 kubelet[3457]: I1031 00:02:38.286533 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-run\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287320 kubelet[3457]: I1031 00:02:38.286551 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hubble-tls\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287320 kubelet[3457]: I1031 00:02:38.286567 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6275dbb-2c27-4e0f-baea-89ef852cccb7-cilium-config-path\") pod \"c6275dbb-2c27-4e0f-baea-89ef852cccb7\" (UID: \"c6275dbb-2c27-4e0f-baea-89ef852cccb7\") " Oct 31 00:02:38.287442 kubelet[3457]: I1031 00:02:38.286584 3457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-etc-cni-netd\") pod \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\" (UID: \"cf34bf6e-2f6c-4fcf-863d-7d5010123939\") " Oct 31 00:02:38.287442 kubelet[3457]: I1031 00:02:38.286618 3457 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hostproc\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.287442 kubelet[3457]: I1031 00:02:38.286628 3457 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-lib-modules\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.287442 kubelet[3457]: I1031 00:02:38.286648 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.287442 kubelet[3457]: I1031 00:02:38.286662 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.287442 kubelet[3457]: I1031 00:02:38.286676 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.287561 kubelet[3457]: I1031 00:02:38.286698 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.287561 kubelet[3457]: I1031 00:02:38.286710 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.288893 kubelet[3457]: I1031 00:02:38.288709 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.289588 kubelet[3457]: I1031 00:02:38.289461 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:02:38.289588 kubelet[3457]: I1031 00:02:38.289529 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.289588 kubelet[3457]: I1031 00:02:38.289551 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cni-path" (OuterVolumeSpecName: "cni-path") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:02:38.292509 kubelet[3457]: I1031 00:02:38.292457 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-kube-api-access-252cw" (OuterVolumeSpecName: "kube-api-access-252cw") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "kube-api-access-252cw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:02:38.293425 kubelet[3457]: I1031 00:02:38.293362 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6275dbb-2c27-4e0f-baea-89ef852cccb7-kube-api-access-cgn44" (OuterVolumeSpecName: "kube-api-access-cgn44") pod "c6275dbb-2c27-4e0f-baea-89ef852cccb7" (UID: "c6275dbb-2c27-4e0f-baea-89ef852cccb7"). InnerVolumeSpecName "kube-api-access-cgn44". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:02:38.294338 kubelet[3457]: I1031 00:02:38.294268 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:02:38.294541 kubelet[3457]: I1031 00:02:38.294429 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf34bf6e-2f6c-4fcf-863d-7d5010123939-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cf34bf6e-2f6c-4fcf-863d-7d5010123939" (UID: "cf34bf6e-2f6c-4fcf-863d-7d5010123939"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:02:38.294571 kubelet[3457]: I1031 00:02:38.294551 3457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6275dbb-2c27-4e0f-baea-89ef852cccb7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6275dbb-2c27-4e0f-baea-89ef852cccb7" (UID: "c6275dbb-2c27-4e0f-baea-89ef852cccb7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:02:38.381081 systemd[1]: Removed slice kubepods-besteffort-podc6275dbb_2c27_4e0f_baea_89ef852cccb7.slice - libcontainer container kubepods-besteffort-podc6275dbb_2c27_4e0f_baea_89ef852cccb7.slice. Oct 31 00:02:38.382443 systemd[1]: Removed slice kubepods-burstable-podcf34bf6e_2f6c_4fcf_863d_7d5010123939.slice - libcontainer container kubepods-burstable-podcf34bf6e_2f6c_4fcf_863d_7d5010123939.slice. Oct 31 00:02:38.382547 systemd[1]: kubepods-burstable-podcf34bf6e_2f6c_4fcf_863d_7d5010123939.slice: Consumed 6.470s CPU time, 123.8M memory peak, 136K read from disk, 12.9M written to disk. 
Oct 31 00:02:38.387699 kubelet[3457]: I1031 00:02:38.387664 3457 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-bpf-maps\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387699 kubelet[3457]: I1031 00:02:38.387697 3457 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cni-path\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387699 kubelet[3457]: I1031 00:02:38.387708 3457 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-net\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387719 3457 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-xtables-lock\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387729 3457 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cgn44\" (UniqueName: \"kubernetes.io/projected/c6275dbb-2c27-4e0f-baea-89ef852cccb7-kube-api-access-cgn44\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387738 3457 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-cgroup\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387747 3457 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf34bf6e-2f6c-4fcf-863d-7d5010123939-clustermesh-secrets\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387755 3457 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-config-path\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387763 3457 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-252cw\" (UniqueName: \"kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-kube-api-access-252cw\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387772 3457 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-cilium-run\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.387864 kubelet[3457]: I1031 00:02:38.387782 3457 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf34bf6e-2f6c-4fcf-863d-7d5010123939-hubble-tls\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.388044 kubelet[3457]: I1031 00:02:38.387791 3457 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-etc-cni-netd\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.388044 kubelet[3457]: I1031 00:02:38.387798 3457 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6275dbb-2c27-4e0f-baea-89ef852cccb7-cilium-config-path\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.388044 kubelet[3457]: I1031 00:02:38.387808 3457 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf34bf6e-2f6c-4fcf-863d-7d5010123939-host-proc-sys-kernel\") on node \"ci-4230.2.4-n-0164ad71e3\" DevicePath \"\"" Oct 31 00:02:38.826711 kubelet[3457]: E1031 00:02:38.826633 3457 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 31 00:02:38.912938 kubelet[3457]: I1031 00:02:38.912903 3457 scope.go:117] "RemoveContainer" containerID="ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9" Oct 31 00:02:38.918044 containerd[1743]: time="2025-10-31T00:02:38.917699084Z" level=info msg="RemoveContainer for \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\"" Oct 31 00:02:38.920411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32-rootfs.mount: Deactivated successfully. Oct 31 00:02:38.920515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a776050b7b93ab68a7d0a35c103485dbe04650aee84ec3c82c7ee147544f8a32-shm.mount: Deactivated successfully. Oct 31 00:02:38.920571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a31dc29111cdcd1a83e18e2a2c1a8f329779bc876aa17f67ede256eaf5e578-rootfs.mount: Deactivated successfully. Oct 31 00:02:38.920620 systemd[1]: var-lib-kubelet-pods-cf34bf6e\x2d2f6c\x2d4fcf\x2d863d\x2d7d5010123939-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d252cw.mount: Deactivated successfully. Oct 31 00:02:38.920671 systemd[1]: var-lib-kubelet-pods-cf34bf6e\x2d2f6c\x2d4fcf\x2d863d\x2d7d5010123939-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 31 00:02:38.920719 systemd[1]: var-lib-kubelet-pods-c6275dbb\x2d2c27\x2d4e0f\x2dbaea\x2d89ef852cccb7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgn44.mount: Deactivated successfully. Oct 31 00:02:38.920769 systemd[1]: var-lib-kubelet-pods-cf34bf6e\x2d2f6c\x2d4fcf\x2d863d\x2d7d5010123939-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 31 00:02:38.938746 containerd[1743]: time="2025-10-31T00:02:38.938391708Z" level=info msg="RemoveContainer for \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\" returns successfully" Oct 31 00:02:38.939916 kubelet[3457]: I1031 00:02:38.939872 3457 scope.go:117] "RemoveContainer" containerID="ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9" Oct 31 00:02:38.940241 containerd[1743]: time="2025-10-31T00:02:38.940163990Z" level=error msg="ContainerStatus for \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\": not found" Oct 31 00:02:38.940386 kubelet[3457]: E1031 00:02:38.940324 3457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\": not found" containerID="ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9" Oct 31 00:02:38.940457 kubelet[3457]: I1031 00:02:38.940357 3457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9"} err="failed to get container status \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccfe71fa37cc570992a1875224b091375a829f02414af00a2e6f529d3b8a89d9\": not found" Oct 31 00:02:38.940457 kubelet[3457]: I1031 00:02:38.940448 3457 scope.go:117] "RemoveContainer" containerID="2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41" Oct 31 00:02:38.943288 containerd[1743]: time="2025-10-31T00:02:38.942942033Z" level=info msg="RemoveContainer for \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\"" Oct 31 00:02:38.962747 containerd[1743]: time="2025-10-31T00:02:38.962651976Z" level=info msg="RemoveContainer for \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\" returns successfully" Oct 31 00:02:38.963267 kubelet[3457]: I1031 00:02:38.963163 3457 scope.go:117] "RemoveContainer" containerID="eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810" Oct 31 00:02:38.964580 containerd[1743]: time="2025-10-31T00:02:38.964544098Z" level=info msg="RemoveContainer for \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\"" Oct 31 00:02:38.977069 containerd[1743]: time="2025-10-31T00:02:38.977025193Z" level=info msg="RemoveContainer for \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\" returns successfully" Oct 31 00:02:38.979440 kubelet[3457]: I1031 00:02:38.979060 3457 scope.go:117] "RemoveContainer" containerID="9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e" Oct 31 00:02:38.980581 containerd[1743]: time="2025-10-31T00:02:38.980299237Z" level=info msg="RemoveContainer for \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\"" Oct 31 00:02:38.993164 containerd[1743]: time="2025-10-31T00:02:38.993044611Z" level=info msg="RemoveContainer for \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\" returns successfully" Oct 31 00:02:38.993321 kubelet[3457]: I1031 00:02:38.993289 3457 scope.go:117] "RemoveContainer" containerID="4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b" Oct 31 00:02:38.994696 containerd[1743]: 
time="2025-10-31T00:02:38.994430373Z" level=info msg="RemoveContainer for \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\"" Oct 31 00:02:39.008933 containerd[1743]: time="2025-10-31T00:02:39.008782830Z" level=info msg="RemoveContainer for \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\" returns successfully" Oct 31 00:02:39.009065 kubelet[3457]: I1031 00:02:39.009031 3457 scope.go:117] "RemoveContainer" containerID="dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49" Oct 31 00:02:39.011267 containerd[1743]: time="2025-10-31T00:02:39.011230433Z" level=info msg="RemoveContainer for \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\"" Oct 31 00:02:39.021814 containerd[1743]: time="2025-10-31T00:02:39.021764125Z" level=info msg="RemoveContainer for \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" returns successfully" Oct 31 00:02:39.022086 kubelet[3457]: I1031 00:02:39.022059 3457 scope.go:117] "RemoveContainer" containerID="2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41" Oct 31 00:02:39.022349 containerd[1743]: time="2025-10-31T00:02:39.022315046Z" level=error msg="ContainerStatus for \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\": not found" Oct 31 00:02:39.022474 kubelet[3457]: E1031 00:02:39.022449 3457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\": not found" containerID="2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41" Oct 31 00:02:39.022524 kubelet[3457]: I1031 00:02:39.022480 3457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41"} err="failed to get container status \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d341e62615987e35eb637b4a126c1cf410d7d836c844e7ddaf98b60fcff5d41\": not found" Oct 31 00:02:39.022524 kubelet[3457]: I1031 00:02:39.022501 3457 scope.go:117] "RemoveContainer" containerID="eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810" Oct 31 00:02:39.023036 kubelet[3457]: E1031 00:02:39.022840 3457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\": not found" containerID="eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810" Oct 31 00:02:39.023036 kubelet[3457]: I1031 00:02:39.022862 3457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810"} err="failed to get container status \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\": rpc error: code = NotFound desc = an error occurred when try to find container \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\": not found" Oct 31 00:02:39.023036 kubelet[3457]: I1031 00:02:39.022900 3457 scope.go:117] "RemoveContainer" containerID="9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e" Oct 31 
00:02:39.023147 containerd[1743]: time="2025-10-31T00:02:39.022702646Z" level=error msg="ContainerStatus for \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eab839ee3bd9eaff52bc88e81db3ea9c9ecf710e921ab9845c841fe46cf52810\": not found" Oct 31 00:02:39.023147 containerd[1743]: time="2025-10-31T00:02:39.023096887Z" level=error msg="ContainerStatus for \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\": not found" Oct 31 00:02:39.023252 kubelet[3457]: E1031 00:02:39.023219 3457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\": not found" containerID="9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e" Oct 31 00:02:39.023281 kubelet[3457]: I1031 00:02:39.023253 3457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e"} err="failed to get container status \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9165ba30ba874a4970e6c9358991e42f54b76a004ca89455ccebd2fd237a218e\": not found" Oct 31 00:02:39.023281 kubelet[3457]: I1031 00:02:39.023271 3457 scope.go:117] "RemoveContainer" containerID="4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b" Oct 31 00:02:39.023510 containerd[1743]: time="2025-10-31T00:02:39.023476607Z" level=error msg="ContainerStatus for \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\": not found" Oct 31 00:02:39.023703 kubelet[3457]: E1031 00:02:39.023640 3457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\": not found" containerID="4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b" Oct 31 00:02:39.023754 kubelet[3457]: I1031 00:02:39.023708 3457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b"} err="failed to get container status \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4fea37136f4a87c8777851fb1c750d46c234c0d698f68332b4e03c4f5e4db20b\": not found" Oct 31 00:02:39.023754 kubelet[3457]: I1031 00:02:39.023727 3457 scope.go:117] "RemoveContainer" containerID="dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49" Oct 31 00:02:39.024021 containerd[1743]: time="2025-10-31T00:02:39.023984248Z" level=error msg="ContainerStatus for \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\": not found" Oct 31 00:02:39.024159 kubelet[3457]: 
E1031 00:02:39.024119 3457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\": not found" containerID="dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49" Oct 31 00:02:39.024159 kubelet[3457]: I1031 00:02:39.024146 3457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49"} err="failed to get container status \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc0e20d4f61e4a811308f3edfed0b1ccfa2ab37ca307aecd08b605d4dcb53e49\": not found" Oct 31 00:02:39.910226 sshd[5224]: Connection closed by 10.200.16.10 port 58556 Oct 31 00:02:39.910955 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:39.914466 systemd-logind[1718]: Session 34 logged out. Waiting for processes to exit. Oct 31 00:02:39.914724 systemd[1]: sshd@31-10.200.20.15:22-10.200.16.10:58556.service: Deactivated successfully. Oct 31 00:02:39.916967 systemd[1]: session-34.scope: Deactivated successfully. Oct 31 00:02:39.918910 systemd[1]: session-34.scope: Consumed 1.779s CPU time, 23.6M memory peak. Oct 31 00:02:39.920629 systemd-logind[1718]: Removed session 34. Oct 31 00:02:39.998161 systemd[1]: Started sshd@32-10.200.20.15:22-10.200.16.10:47272.service - OpenSSH per-connection server daemon (10.200.16.10:47272). Oct 31 00:02:40.127371 kubelet[3457]: I1031 00:02:40.127316 3457 setters.go:602] "Node became not ready" node="ci-4230.2.4-n-0164ad71e3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-31T00:02:40Z","lastTransitionTime":"2025-10-31T00:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 31 00:02:40.376346 kubelet[3457]: I1031 00:02:40.376216 3457 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6275dbb-2c27-4e0f-baea-89ef852cccb7" path="/var/lib/kubelet/pods/c6275dbb-2c27-4e0f-baea-89ef852cccb7/volumes" Oct 31 00:02:40.376691 kubelet[3457]: I1031 00:02:40.376664 3457 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf34bf6e-2f6c-4fcf-863d-7d5010123939" path="/var/lib/kubelet/pods/cf34bf6e-2f6c-4fcf-863d-7d5010123939/volumes" Oct 31 00:02:40.461545 sshd[5387]: Accepted publickey for core from 10.200.16.10 port 47272 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:40.463098 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:40.468139 systemd-logind[1718]: New session 35 of user core. Oct 31 00:02:40.477097 systemd[1]: Started session-35.scope - Session 35 of User core. 
Oct 31 00:02:42.457935 kubelet[3457]: I1031 00:02:42.454969 3457 memory_manager.go:355] "RemoveStaleState removing state" podUID="cf34bf6e-2f6c-4fcf-863d-7d5010123939" containerName="cilium-agent" Oct 31 00:02:42.457935 kubelet[3457]: I1031 00:02:42.455004 3457 memory_manager.go:355] "RemoveStaleState removing state" podUID="c6275dbb-2c27-4e0f-baea-89ef852cccb7" containerName="cilium-operator" Oct 31 00:02:42.466103 systemd[1]: Created slice kubepods-burstable-poda9db1835_4183_4eb9_a1ed_0339da96ee9e.slice - libcontainer container kubepods-burstable-poda9db1835_4183_4eb9_a1ed_0339da96ee9e.slice. Oct 31 00:02:42.512940 kubelet[3457]: I1031 00:02:42.512906 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-cilium-run\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513260 kubelet[3457]: I1031 00:02:42.513120 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-lib-modules\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513260 kubelet[3457]: I1031 00:02:42.513148 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9db1835-4183-4eb9-a1ed-0339da96ee9e-cilium-config-path\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513260 kubelet[3457]: I1031 00:02:42.513169 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9db1835-4183-4eb9-a1ed-0339da96ee9e-hubble-tls\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513260 kubelet[3457]: I1031 00:02:42.513189 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvgn\" (UniqueName: \"kubernetes.io/projected/a9db1835-4183-4eb9-a1ed-0339da96ee9e-kube-api-access-njvgn\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513260 kubelet[3457]: I1031 00:02:42.513210 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9db1835-4183-4eb9-a1ed-0339da96ee9e-cilium-ipsec-secrets\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513260 kubelet[3457]: I1031 00:02:42.513227 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-cilium-cgroup\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513822 kubelet[3457]: I1031 00:02:42.513263 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-bpf-maps\") pod \"cilium-xmbqk\" (UID: 
\"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513822 kubelet[3457]: I1031 00:02:42.513289 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-cni-path\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513822 kubelet[3457]: I1031 00:02:42.513338 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-etc-cni-netd\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513822 kubelet[3457]: I1031 00:02:42.513360 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-xtables-lock\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513822 kubelet[3457]: I1031 00:02:42.513379 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-host-proc-sys-kernel\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513822 kubelet[3457]: I1031 00:02:42.513396 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-hostproc\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513984 kubelet[3457]: I1031 00:02:42.513419 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9db1835-4183-4eb9-a1ed-0339da96ee9e-host-proc-sys-net\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.513984 kubelet[3457]: I1031 00:02:42.513471 3457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9db1835-4183-4eb9-a1ed-0339da96ee9e-clustermesh-secrets\") pod \"cilium-xmbqk\" (UID: \"a9db1835-4183-4eb9-a1ed-0339da96ee9e\") " pod="kube-system/cilium-xmbqk" Oct 31 00:02:42.528932 sshd[5391]: Connection closed by 10.200.16.10 port 47272 Oct 31 00:02:42.529112 sshd-session[5387]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:42.533755 systemd[1]: sshd@32-10.200.20.15:22-10.200.16.10:47272.service: Deactivated successfully. Oct 31 00:02:42.535508 systemd[1]: session-35.scope: Deactivated successfully. Oct 31 00:02:42.535801 systemd[1]: session-35.scope: Consumed 1.633s CPU time, 25.7M memory peak. Oct 31 00:02:42.536436 systemd-logind[1718]: Session 35 logged out. Waiting for processes to exit. Oct 31 00:02:42.539025 systemd-logind[1718]: Removed session 35. Oct 31 00:02:42.616195 systemd[1]: Started sshd@33-10.200.20.15:22-10.200.16.10:47282.service - OpenSSH per-connection server daemon (10.200.16.10:47282). 
Oct 31 00:02:42.772277 containerd[1743]: time="2025-10-31T00:02:42.772163676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmbqk,Uid:a9db1835-4183-4eb9-a1ed-0339da96ee9e,Namespace:kube-system,Attempt:0,}" Oct 31 00:02:42.842607 containerd[1743]: time="2025-10-31T00:02:42.842348916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:02:42.842607 containerd[1743]: time="2025-10-31T00:02:42.842403916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:02:42.842607 containerd[1743]: time="2025-10-31T00:02:42.842418476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:02:42.842607 containerd[1743]: time="2025-10-31T00:02:42.842506556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:02:42.864104 systemd[1]: Started cri-containerd-084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040.scope - libcontainer container 084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040. Oct 31 00:02:42.889327 containerd[1743]: time="2025-10-31T00:02:42.889252209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmbqk,Uid:a9db1835-4183-4eb9-a1ed-0339da96ee9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\"" Oct 31 00:02:42.895038 containerd[1743]: time="2025-10-31T00:02:42.894900935Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 31 00:02:42.949836 containerd[1743]: time="2025-10-31T00:02:42.949782717Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad\"" Oct 31 00:02:42.950906 containerd[1743]: time="2025-10-31T00:02:42.950580838Z" level=info msg="StartContainer for \"2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad\"" Oct 31 00:02:42.975088 systemd[1]: Started cri-containerd-2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad.scope - libcontainer container 2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad. Oct 31 00:02:43.015134 systemd[1]: cri-containerd-2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad.scope: Deactivated successfully. Oct 31 00:02:43.019279 containerd[1743]: time="2025-10-31T00:02:43.019234915Z" level=info msg="StartContainer for \"2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad\" returns successfully" Oct 31 00:02:43.100123 sshd[5402]: Accepted publickey for core from 10.200.16.10 port 47282 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:43.101671 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:43.106941 systemd-logind[1718]: New session 36 of user core. Oct 31 00:02:43.112076 systemd[1]: Started session-36.scope - Session 36 of User core. 
Oct 31 00:02:43.115573 containerd[1743]: time="2025-10-31T00:02:43.115330704Z" level=info msg="shim disconnected" id=2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad namespace=k8s.io Oct 31 00:02:43.115573 containerd[1743]: time="2025-10-31T00:02:43.115393584Z" level=warning msg="cleaning up after shim disconnected" id=2cd3bc2fe30ab0b38c6cf34e491ede2ddf8f5d3ab385dda23052475331dcf6ad namespace=k8s.io Oct 31 00:02:43.115573 containerd[1743]: time="2025-10-31T00:02:43.115401824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:43.127125 containerd[1743]: time="2025-10-31T00:02:43.127053397Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:02:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 31 00:02:43.429929 sshd[5497]: Connection closed by 10.200.16.10 port 47282 Oct 31 00:02:43.430543 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:43.434301 systemd[1]: sshd@33-10.200.20.15:22-10.200.16.10:47282.service: Deactivated successfully. Oct 31 00:02:43.436015 systemd[1]: session-36.scope: Deactivated successfully. Oct 31 00:02:43.437616 systemd-logind[1718]: Session 36 logged out. Waiting for processes to exit. Oct 31 00:02:43.438452 systemd-logind[1718]: Removed session 36. Oct 31 00:02:43.513206 systemd[1]: Started sshd@34-10.200.20.15:22-10.200.16.10:47286.service - OpenSSH per-connection server daemon (10.200.16.10:47286). Oct 31 00:02:43.827727 kubelet[3457]: E1031 00:02:43.827593 3457 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 31 00:02:43.940184 sshd[5516]: Accepted publickey for core from 10.200.16.10 port 47286 ssh2: RSA SHA256:nBkLfspKkDLqOT9SkkASHbt5c8U+GcTwmvPM6OoKUzI Oct 31 00:02:43.943583 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:02:43.946137 containerd[1743]: time="2025-10-31T00:02:43.946080521Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 31 00:02:43.952721 systemd-logind[1718]: New session 37 of user core. Oct 31 00:02:43.957043 systemd[1]: Started session-37.scope - Session 37 of User core. Oct 31 00:02:43.980843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252534214.mount: Deactivated successfully. Oct 31 00:02:43.989857 containerd[1743]: time="2025-10-31T00:02:43.989806050Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff\"" Oct 31 00:02:43.992508 containerd[1743]: time="2025-10-31T00:02:43.992465813Z" level=info msg="StartContainer for \"6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff\"" Oct 31 00:02:44.038132 systemd[1]: Started cri-containerd-6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff.scope - libcontainer container 6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff. 
Oct 31 00:02:44.070706 containerd[1743]: time="2025-10-31T00:02:44.070594621Z" level=info msg="StartContainer for \"6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff\" returns successfully" Oct 31 00:02:44.074127 systemd[1]: cri-containerd-6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff.scope: Deactivated successfully. Oct 31 00:02:44.117761 containerd[1743]: time="2025-10-31T00:02:44.117495714Z" level=info msg="shim disconnected" id=6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff namespace=k8s.io Oct 31 00:02:44.117761 containerd[1743]: time="2025-10-31T00:02:44.117654875Z" level=warning msg="cleaning up after shim disconnected" id=6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff namespace=k8s.io Oct 31 00:02:44.117761 containerd[1743]: time="2025-10-31T00:02:44.117668595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:44.624094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d5cb2e298e99aa2a60093b20ec116a475b64daa3096cecb805ba1e1fd353fff-rootfs.mount: Deactivated successfully. Oct 31 00:02:44.952126 containerd[1743]: time="2025-10-31T00:02:44.951542895Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 31 00:02:45.018963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869012340.mount: Deactivated successfully. Oct 31 00:02:45.034851 containerd[1743]: time="2025-10-31T00:02:45.034793389Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004\"" Oct 31 00:02:45.035804 containerd[1743]: time="2025-10-31T00:02:45.035657950Z" level=info msg="StartContainer for \"2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004\"" Oct 31 00:02:45.064103 systemd[1]: Started cri-containerd-2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004.scope - libcontainer container 2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004. Oct 31 00:02:45.104209 systemd[1]: cri-containerd-2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004.scope: Deactivated successfully. Oct 31 00:02:45.105084 containerd[1743]: time="2025-10-31T00:02:45.104757268Z" level=info msg="StartContainer for \"2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004\" returns successfully" Oct 31 00:02:45.145627 containerd[1743]: time="2025-10-31T00:02:45.145563954Z" level=info msg="shim disconnected" id=2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004 namespace=k8s.io Oct 31 00:02:45.145627 containerd[1743]: time="2025-10-31T00:02:45.145623634Z" level=warning msg="cleaning up after shim disconnected" id=2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004 namespace=k8s.io Oct 31 00:02:45.145627 containerd[1743]: time="2025-10-31T00:02:45.145633474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:45.625273 systemd[1]: run-containerd-runc-k8s.io-2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004-runc.jbjjtY.mount: Deactivated successfully. Oct 31 00:02:45.625744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fc6ba20fe5f111e807ecb53d08ff795d3bed61ece9d9e96f2732b5a79985004-rootfs.mount: Deactivated successfully. 
Oct 31 00:02:45.954630 containerd[1743]: time="2025-10-31T00:02:45.954532027Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 31 00:02:46.005551 containerd[1743]: time="2025-10-31T00:02:46.005499485Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5\"" Oct 31 00:02:46.008376 containerd[1743]: time="2025-10-31T00:02:46.008330568Z" level=info msg="StartContainer for \"bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5\"" Oct 31 00:02:46.042141 systemd[1]: Started cri-containerd-bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5.scope - libcontainer container bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5. Oct 31 00:02:46.065665 systemd[1]: cri-containerd-bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5.scope: Deactivated successfully. Oct 31 00:02:46.073487 containerd[1743]: time="2025-10-31T00:02:46.073220441Z" level=info msg="StartContainer for \"bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5\" returns successfully" Oct 31 00:02:46.107580 containerd[1743]: time="2025-10-31T00:02:46.107523320Z" level=info msg="shim disconnected" id=bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5 namespace=k8s.io Oct 31 00:02:46.108130 containerd[1743]: time="2025-10-31T00:02:46.107940560Z" level=warning msg="cleaning up after shim disconnected" id=bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5 namespace=k8s.io Oct 31 00:02:46.108130 containerd[1743]: time="2025-10-31T00:02:46.107959880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:02:46.624282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc1c4b0b69b150a138a7c42532728c0a2df43c707be76f4bb4e65e3029f511f5-rootfs.mount: Deactivated successfully. Oct 31 00:02:46.957770 containerd[1743]: time="2025-10-31T00:02:46.957723999Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 31 00:02:47.005631 containerd[1743]: time="2025-10-31T00:02:47.005256133Z" level=info msg="CreateContainer within sandbox \"084c2b95f0c9fbe56d5c00c6f4001e9e6702d81c673052a8575f6abbaf43c040\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b67dffe5d2cfa7e77970f96d364c9c74e1abeb689d68082356e89629c2f8841f\"" Oct 31 00:02:47.005962 containerd[1743]: time="2025-10-31T00:02:47.005934293Z" level=info msg="StartContainer for \"b67dffe5d2cfa7e77970f96d364c9c74e1abeb689d68082356e89629c2f8841f\"" Oct 31 00:02:47.041058 systemd[1]: run-containerd-runc-k8s.io-b67dffe5d2cfa7e77970f96d364c9c74e1abeb689d68082356e89629c2f8841f-runc.4gqXyc.mount: Deactivated successfully. Oct 31 00:02:47.052152 systemd[1]: Started cri-containerd-b67dffe5d2cfa7e77970f96d364c9c74e1abeb689d68082356e89629c2f8841f.scope - libcontainer container b67dffe5d2cfa7e77970f96d364c9c74e1abeb689d68082356e89629c2f8841f. 
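The CreateContainer/StartContainer entries above walk through the cilium-xmbqk pod's containers one by one inside the same sandbox. A hedged Python sketch (assuming containerd keeps the &ContainerMetadata{Name:…,} layout shown) that recovers that order from the journal:

```python
import re

# Matches only the CreateContainer *request* entries, which carry the container name.
NAME = re.compile(r'CreateContainer within sandbox .*? for container '
                  r'&ContainerMetadata\{Name:(?P<name>[^,]+),')

def container_order(lines):
    """Return container names in the order containerd was asked to create them."""
    names = []
    for line in lines:
        names.extend(m.group("name") for m in NAME.finditer(line))
    return names

# For the cilium-xmbqk pod in this journal the expected result is:
# ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#  'clean-cilium-state', 'cilium-agent']
```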
Oct 31 00:02:47.088567 containerd[1743]: time="2025-10-31T00:02:47.088513307Z" level=info msg="StartContainer for \"b67dffe5d2cfa7e77970f96d364c9c74e1abeb689d68082356e89629c2f8841f\" returns successfully" Oct 31 00:02:47.582154 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Oct 31 00:02:50.279952 systemd-networkd[1341]: lxc_health: Link UP Oct 31 00:02:50.280575 systemd-networkd[1341]: lxc_health: Gained carrier Oct 31 00:02:50.795970 kubelet[3457]: I1031 00:02:50.795876 3457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xmbqk" podStartSLOduration=8.795854846 podStartE2EDuration="8.795854846s" podCreationTimestamp="2025-10-31 00:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:02:47.982487955 +0000 UTC m=+287.799384673" watchObservedRunningTime="2025-10-31 00:02:50.795854846 +0000 UTC m=+290.612751564" Oct 31 00:02:51.549195 systemd-networkd[1341]: lxc_health: Gained IPv6LL Oct 31 00:02:52.744769 kubelet[3457]: E1031 00:02:52.744723 3457 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46310->127.0.0.1:45317: write tcp 127.0.0.1:46310->127.0.0.1:45317: write: broken pipe Oct 31 00:02:54.821676 systemd[1]: run-containerd-runc-k8s.io-b67dffe5d2cfa7e77970f96d364c9c74e1abeb689d68082356e89629c2f8841f-runc.hpEDZM.mount: Deactivated successfully. Oct 31 00:02:57.046769 sshd[5518]: Connection closed by 10.200.16.10 port 47286 Oct 31 00:02:57.047432 sshd-session[5516]: pam_unix(sshd:session): session closed for user core Oct 31 00:02:57.051013 systemd[1]: sshd@34-10.200.20.15:22-10.200.16.10:47286.service: Deactivated successfully. Oct 31 00:02:57.054137 systemd[1]: session-37.scope: Deactivated successfully. Oct 31 00:02:57.055381 systemd-logind[1718]: Session 37 logged out. Waiting for processes to exit. Oct 31 00:02:57.056749 systemd-logind[1718]: Removed session 37.