Sep 4 23:59:14.360264 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 23:59:14.360288 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025 Sep 4 23:59:14.360297 kernel: KASLR enabled Sep 4 23:59:14.360303 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 4 23:59:14.360311 kernel: printk: bootconsole [pl11] enabled Sep 4 23:59:14.360317 kernel: efi: EFI v2.7 by EDK II Sep 4 23:59:14.360324 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Sep 4 23:59:14.360331 kernel: random: crng init done Sep 4 23:59:14.360337 kernel: secureboot: Secure boot disabled Sep 4 23:59:14.360353 kernel: ACPI: Early table checksum verification disabled Sep 4 23:59:14.360360 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 4 23:59:14.360367 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360373 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360381 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 4 23:59:14.360389 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360396 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360402 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360410 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360417 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360423 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360430 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 4 23:59:14.360436 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 23:59:14.360443 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 4 23:59:14.360449 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 4 23:59:14.360456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Sep 4 23:59:14.360463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Sep 4 23:59:14.360482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Sep 4 23:59:14.360499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Sep 4 23:59:14.360508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Sep 4 23:59:14.360526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Sep 4 23:59:14.360533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Sep 4 23:59:14.360539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Sep 4 23:59:14.360546 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Sep 4 23:59:14.360552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Sep 4 23:59:14.360559 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Sep 4 23:59:14.360566 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Sep 4 23:59:14.360572 kernel: Zone ranges: Sep 4 23:59:14.360579 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Sep 4 23:59:14.360585 kernel: DMA32 empty Sep 4 23:59:14.360592 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 4 23:59:14.360603 kernel: Movable zone start for each node Sep 4 23:59:14.360610 kernel: Early memory node ranges Sep 4 23:59:14.360617 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 4 23:59:14.360624 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 4 23:59:14.360631 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 4 23:59:14.360640 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 4 23:59:14.360647 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 4 23:59:14.360654 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 4 23:59:14.360661 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 4 23:59:14.360668 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 4 23:59:14.360675 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 4 23:59:14.360682 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 4 23:59:14.360689 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 4 23:59:14.360696 kernel: psci: probing for conduit method from ACPI. Sep 4 23:59:14.360703 kernel: psci: PSCIv1.1 detected in firmware. Sep 4 23:59:14.360710 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 23:59:14.360717 kernel: psci: MIGRATE_INFO_TYPE not supported. Sep 4 23:59:14.360725 kernel: psci: SMC Calling Convention v1.4 Sep 4 23:59:14.360732 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 4 23:59:14.360739 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 4 23:59:14.360746 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 23:59:14.360753 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 23:59:14.360760 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 4 23:59:14.360767 kernel: Detected PIPT I-cache on CPU0 Sep 4 23:59:14.360774 kernel: CPU features: detected: GIC system register CPU interface Sep 4 23:59:14.360781 kernel: CPU features: detected: Hardware dirty bit management Sep 4 23:59:14.360788 kernel: CPU features: detected: Spectre-BHB Sep 4 23:59:14.360795 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 23:59:14.360803 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 23:59:14.360810 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 23:59:14.360817 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 4 23:59:14.360824 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 23:59:14.360831 kernel: alternatives: applying boot alternatives Sep 4 23:59:14.360839 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e Sep 4 23:59:14.360847 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 4 23:59:14.360854 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 23:59:14.360861 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 23:59:14.360868 kernel: Fallback order for Node 0: 0 Sep 4 23:59:14.360875 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 4 23:59:14.360883 kernel: Policy zone: Normal Sep 4 23:59:14.360890 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 23:59:14.360897 kernel: software IO TLB: area num 2. Sep 4 23:59:14.360904 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB) Sep 4 23:59:14.360911 kernel: Memory: 3983524K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 210636K reserved, 0K cma-reserved) Sep 4 23:59:14.360918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 23:59:14.360925 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 23:59:14.360933 kernel: rcu: RCU event tracing is enabled. Sep 4 23:59:14.360940 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 23:59:14.360947 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 23:59:14.360954 kernel: Tracing variant of Tasks RCU enabled. Sep 4 23:59:14.360963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 23:59:14.360970 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 23:59:14.360977 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 23:59:14.360984 kernel: GICv3: 960 SPIs implemented Sep 4 23:59:14.360991 kernel: GICv3: 0 Extended SPIs implemented Sep 4 23:59:14.360997 kernel: Root IRQ handler: gic_handle_irq Sep 4 23:59:14.361004 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 23:59:14.361011 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 4 23:59:14.361018 kernel: ITS: No ITS available, not enabling LPIs Sep 4 23:59:14.361025 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 23:59:14.361032 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 23:59:14.361039 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 23:59:14.361048 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 23:59:14.361055 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 23:59:14.361062 kernel: Console: colour dummy device 80x25 Sep 4 23:59:14.361070 kernel: printk: console [tty1] enabled Sep 4 23:59:14.361077 kernel: ACPI: Core revision 20230628 Sep 4 23:59:14.361084 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 4 23:59:14.361091 kernel: pid_max: default: 32768 minimum: 301 Sep 4 23:59:14.361099 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 23:59:14.361106 kernel: landlock: Up and running. Sep 4 23:59:14.361114 kernel: SELinux: Initializing. Sep 4 23:59:14.361122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 23:59:14.361129 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 23:59:14.361136 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 4 23:59:14.361144 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 4 23:59:14.361151 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 4 23:59:14.361158 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 4 23:59:14.361172 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 23:59:14.361180 kernel: rcu: Hierarchical SRCU implementation. Sep 4 23:59:14.361187 kernel: rcu: Max phase no-delay instances is 400. Sep 4 23:59:14.361195 kernel: Remapping and enabling EFI services. Sep 4 23:59:14.361202 kernel: smp: Bringing up secondary CPUs ... Sep 4 23:59:14.361211 kernel: Detected PIPT I-cache on CPU1 Sep 4 23:59:14.361219 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 4 23:59:14.361227 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 23:59:14.361234 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 23:59:14.361242 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 23:59:14.361251 kernel: SMP: Total of 2 processors activated. Sep 4 23:59:14.361273 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 23:59:14.361285 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 4 23:59:14.361293 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 23:59:14.361300 kernel: CPU features: detected: CRC32 instructions Sep 4 23:59:14.361308 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 23:59:14.361316 kernel: CPU features: detected: LSE atomic instructions Sep 4 23:59:14.361323 kernel: CPU features: detected: Privileged Access Never Sep 4 23:59:14.361331 kernel: CPU: All CPU(s) started at EL1 Sep 4 23:59:14.361341 kernel: alternatives: applying system-wide alternatives Sep 4 23:59:14.361348 kernel: devtmpfs: initialized Sep 4 23:59:14.361356 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 23:59:14.361363 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 23:59:14.361371 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 23:59:14.361378 kernel: SMBIOS 3.1.0 present. Sep 4 23:59:14.361386 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 4 23:59:14.361393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 23:59:14.361401 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 23:59:14.361410 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 23:59:14.361418 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 23:59:14.361425 kernel: audit: initializing netlink subsys (disabled) Sep 4 23:59:14.361433 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 4 23:59:14.361440 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 23:59:14.361448 kernel: cpuidle: using governor menu Sep 4 23:59:14.361455 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 4 23:59:14.361463 kernel: ASID allocator initialised with 32768 entries Sep 4 23:59:14.361471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 23:59:14.361480 kernel: Serial: AMBA PL011 UART driver Sep 4 23:59:14.361487 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 23:59:14.361495 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 23:59:14.361502 kernel: Modules: 509248 pages in range for PLT usage Sep 4 23:59:14.361967 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 23:59:14.361988 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 23:59:14.361996 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 23:59:14.362004 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 23:59:14.362011 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 23:59:14.362024 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 23:59:14.362031 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 23:59:14.362039 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 23:59:14.362047 kernel: ACPI: Added _OSI(Module Device) Sep 4 23:59:14.362054 kernel: ACPI: Added _OSI(Processor Device) Sep 4 23:59:14.362062 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 23:59:14.362069 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 23:59:14.362077 kernel: ACPI: Interpreter enabled Sep 4 23:59:14.362084 kernel: ACPI: Using GIC for interrupt routing Sep 4 23:59:14.362094 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 4 23:59:14.362101 kernel: printk: console [ttyAMA0] enabled Sep 4 23:59:14.362109 kernel: printk: bootconsole [pl11] disabled Sep 4 23:59:14.362117 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 4 23:59:14.362124 kernel: iommu: Default domain type: Translated Sep 4 23:59:14.362132 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 23:59:14.362140 kernel: efivars: Registered efivars operations Sep 4 23:59:14.362147 kernel: vgaarb: loaded Sep 4 23:59:14.362155 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 23:59:14.362164 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 23:59:14.362172 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 23:59:14.362179 kernel: pnp: PnP ACPI init Sep 4 23:59:14.362187 kernel: pnp: PnP ACPI: found 0 devices Sep 4 23:59:14.362195 kernel: NET: Registered PF_INET protocol family Sep 4 23:59:14.362202 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 23:59:14.362210 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 23:59:14.362217 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 23:59:14.362225 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 23:59:14.362234 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 23:59:14.362242 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 23:59:14.362250 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 23:59:14.362257 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 23:59:14.362265 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 23:59:14.362273 kernel: PCI: CLS 0 bytes, default 64 
Sep 4 23:59:14.362280 kernel: kvm [1]: HYP mode not available Sep 4 23:59:14.362288 kernel: Initialise system trusted keyrings Sep 4 23:59:14.362295 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 23:59:14.362305 kernel: Key type asymmetric registered Sep 4 23:59:14.362312 kernel: Asymmetric key parser 'x509' registered Sep 4 23:59:14.362320 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 23:59:14.362327 kernel: io scheduler mq-deadline registered Sep 4 23:59:14.362335 kernel: io scheduler kyber registered Sep 4 23:59:14.362342 kernel: io scheduler bfq registered Sep 4 23:59:14.362350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 23:59:14.362357 kernel: thunder_xcv, ver 1.0 Sep 4 23:59:14.362365 kernel: thunder_bgx, ver 1.0 Sep 4 23:59:14.362374 kernel: nicpf, ver 1.0 Sep 4 23:59:14.362381 kernel: nicvf, ver 1.0 Sep 4 23:59:14.368278 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 23:59:14.368408 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:59:13 UTC (1757030353) Sep 4 23:59:14.368421 kernel: efifb: probing for efifb Sep 4 23:59:14.368429 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 23:59:14.368437 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 23:59:14.368444 kernel: efifb: scrolling: redraw Sep 4 23:59:14.368457 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 23:59:14.368465 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 23:59:14.368473 kernel: fb0: EFI VGA frame buffer device Sep 4 23:59:14.368480 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 4 23:59:14.368488 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 23:59:14.368495 kernel: No ACPI PMU IRQ for CPU0 Sep 4 23:59:14.368503 kernel: No ACPI PMU IRQ for CPU1 Sep 4 23:59:14.368528 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 4 23:59:14.368541 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 23:59:14.368552 kernel: watchdog: Hard watchdog permanently disabled Sep 4 23:59:14.368559 kernel: NET: Registered PF_INET6 protocol family Sep 4 23:59:14.368567 kernel: Segment Routing with IPv6 Sep 4 23:59:14.368575 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 23:59:14.368583 kernel: NET: Registered PF_PACKET protocol family Sep 4 23:59:14.368590 kernel: Key type dns_resolver registered Sep 4 23:59:14.368598 kernel: registered taskstats version 1 Sep 4 23:59:14.368606 kernel: Loading compiled-in X.509 certificates Sep 4 23:59:14.368614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0' Sep 4 23:59:14.368623 kernel: Key type .fscrypt registered Sep 4 23:59:14.368631 kernel: Key type fscrypt-provisioning registered Sep 4 23:59:14.368639 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 4 23:59:14.368646 kernel: ima: Allocated hash algorithm: sha1 Sep 4 23:59:14.368654 kernel: ima: No architecture policies found Sep 4 23:59:14.368662 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 23:59:14.368669 kernel: clk: Disabling unused clocks Sep 4 23:59:14.368677 kernel: Freeing unused kernel memory: 38400K Sep 4 23:59:14.368684 kernel: Run /init as init process Sep 4 23:59:14.368694 kernel: with arguments: Sep 4 23:59:14.368702 kernel: /init Sep 4 23:59:14.368709 kernel: with environment: Sep 4 23:59:14.368726 kernel: HOME=/ Sep 4 23:59:14.368747 kernel: TERM=linux Sep 4 23:59:14.368755 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 23:59:14.368764 systemd[1]: Successfully made /usr/ read-only. Sep 4 23:59:14.368776 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:59:14.368790 systemd[1]: Detected virtualization microsoft. Sep 4 23:59:14.368800 systemd[1]: Detected architecture arm64. Sep 4 23:59:14.368811 systemd[1]: Running in initrd. Sep 4 23:59:14.368819 systemd[1]: No hostname configured, using default hostname. Sep 4 23:59:14.368828 systemd[1]: Hostname set to . Sep 4 23:59:14.368836 systemd[1]: Initializing machine ID from random generator. Sep 4 23:59:14.368844 systemd[1]: Queued start job for default target initrd.target. Sep 4 23:59:14.368852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:59:14.368862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:59:14.368871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 23:59:14.368880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:59:14.368888 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 23:59:14.368897 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 23:59:14.368907 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 23:59:14.368917 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 23:59:14.368925 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:59:14.368934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:59:14.368942 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:59:14.368950 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:59:14.368958 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:59:14.368967 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:59:14.368975 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:59:14.368983 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:59:14.368996 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 23:59:14.369005 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 4 23:59:14.369013 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:59:14.369021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:59:14.369030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:59:14.369038 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:59:14.369046 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 23:59:14.369055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:59:14.369065 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 23:59:14.369073 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 23:59:14.369082 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:59:14.369090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:59:14.369124 systemd-journald[218]: Collecting audit messages is disabled. Sep 4 23:59:14.369146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:59:14.369155 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 23:59:14.369164 systemd-journald[218]: Journal started Sep 4 23:59:14.369183 systemd-journald[218]: Runtime Journal (/run/log/journal/c3122a86fbab46c4a2223192512a7ae1) is 8M, max 78.5M, 70.5M free. Sep 4 23:59:14.359468 systemd-modules-load[220]: Inserted module 'overlay' Sep 4 23:59:14.396822 kernel: Bridge firewalling registered Sep 4 23:59:14.396847 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:59:14.390258 systemd-modules-load[220]: Inserted module 'br_netfilter' Sep 4 23:59:14.410544 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 23:59:14.417781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:59:14.438973 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 23:59:14.443669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:59:14.453773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:59:14.484942 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:59:14.493680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:59:14.519861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:59:14.536932 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:59:14.546205 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:59:14.569690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:59:14.577598 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:59:14.604905 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 23:59:14.612717 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:59:14.627821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:59:14.652754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 23:59:14.668429 dracut-cmdline[252]: dracut-dracut-053 Sep 4 23:59:14.674814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:59:14.691861 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e Sep 4 23:59:14.746620 systemd-resolved[264]: Positive Trust Anchors: Sep 4 23:59:14.747557 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:59:14.747590 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:59:14.749790 systemd-resolved[264]: Defaulting to hostname 'linux'. Sep 4 23:59:14.751255 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:59:14.757868 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:59:14.865542 kernel: SCSI subsystem initialized Sep 4 23:59:14.873532 kernel: Loading iSCSI transport class v2.0-870. Sep 4 23:59:14.884546 kernel: iscsi: registered transport (tcp) Sep 4 23:59:14.901756 kernel: iscsi: registered transport (qla4xxx) Sep 4 23:59:14.901793 kernel: QLogic iSCSI HBA Driver Sep 4 23:59:14.941212 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 23:59:14.955719 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 23:59:14.986534 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 23:59:14.986582 kernel: device-mapper: uevent: version 1.0.3 Sep 4 23:59:14.996955 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 23:59:15.045542 kernel: raid6: neonx8 gen() 15780 MB/s Sep 4 23:59:15.065523 kernel: raid6: neonx4 gen() 15801 MB/s Sep 4 23:59:15.086521 kernel: raid6: neonx2 gen() 13215 MB/s Sep 4 23:59:15.107522 kernel: raid6: neonx1 gen() 10432 MB/s Sep 4 23:59:15.127521 kernel: raid6: int64x8 gen() 6788 MB/s Sep 4 23:59:15.147521 kernel: raid6: int64x4 gen() 7362 MB/s Sep 4 23:59:15.168527 kernel: raid6: int64x2 gen() 6115 MB/s Sep 4 23:59:15.193142 kernel: raid6: int64x1 gen() 5061 MB/s Sep 4 23:59:15.193169 kernel: raid6: using algorithm neonx4 gen() 15801 MB/s Sep 4 23:59:15.217928 kernel: raid6: .... 
xor() 12500 MB/s, rmw enabled Sep 4 23:59:15.217942 kernel: raid6: using neon recovery algorithm Sep 4 23:59:15.231632 kernel: xor: measuring software checksum speed Sep 4 23:59:15.231647 kernel: 8regs : 21613 MB/sec Sep 4 23:59:15.236370 kernel: 32regs : 21613 MB/sec Sep 4 23:59:15.240381 kernel: arm64_neon : 28070 MB/sec Sep 4 23:59:15.245222 kernel: xor: using function: arm64_neon (28070 MB/sec) Sep 4 23:59:15.295531 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 23:59:15.305147 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:59:15.320666 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:59:15.345548 systemd-udevd[439]: Using default interface naming scheme 'v255'. Sep 4 23:59:15.351387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:59:15.376711 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 23:59:15.390997 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Sep 4 23:59:15.414604 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:59:15.434020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:59:15.473835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:59:15.492754 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 23:59:15.518663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 23:59:15.533213 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:59:15.547374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:59:15.554097 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:59:15.571705 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 23:59:15.612163 kernel: hv_vmbus: Vmbus version:5.3 Sep 4 23:59:15.612188 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 23:59:15.608966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:59:15.633976 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 23:59:15.634000 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 4 23:59:15.609090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:59:15.677144 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 23:59:15.677311 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 4 23:59:15.677323 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 23:59:15.668530 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:59:15.698524 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 23:59:15.698549 kernel: scsi host0: storvsc_host_t Sep 4 23:59:15.711888 kernel: scsi host1: storvsc_host_t Sep 4 23:59:15.730619 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 23:59:15.730645 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 23:59:15.689253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 4 23:59:15.749865 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 23:59:15.702134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:59:15.773003 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 23:59:15.712018 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:59:15.743757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:59:15.798499 kernel: PTP clock support registered Sep 4 23:59:15.756746 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:59:15.781675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:59:15.830839 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 23:59:15.830864 kernel: hv_netvsc 00224879-a22a-0022-4879-a22a00224879 eth0: VF slot 1 added Sep 4 23:59:15.831029 kernel: hv_vmbus: registering driver hv_utils Sep 4 23:59:15.815001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:59:15.815192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:59:16.346772 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 23:59:16.346793 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 23:59:16.346803 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 23:59:15.861056 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:59:16.345041 systemd-resolved[264]: Clock change detected. Flushing caches. Sep 4 23:59:16.415417 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 23:59:16.415593 kernel: hv_vmbus: registering driver hv_pci Sep 4 23:59:16.415605 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 23:59:16.415615 kernel: hv_pci abb7ab42-6d3c-4b3c-b0c4-72a10f611c47: PCI VMBus probing: Using version 0x10004 Sep 4 23:59:16.415716 kernel: hv_pci abb7ab42-6d3c-4b3c-b0c4-72a10f611c47: PCI host bridge to bus 6d3c:00 Sep 4 23:59:16.415793 kernel: pci_bus 6d3c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 4 23:59:16.415888 kernel: pci_bus 6d3c:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 23:59:16.415966 kernel: pci 6d3c:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 4 23:59:16.415989 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 23:59:16.423654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:59:16.455418 kernel: pci 6d3c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 4 23:59:16.455476 kernel: pci 6d3c:00:02.0: enabling Extended Tags Sep 4 23:59:16.441650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 23:59:16.515864 kernel: pci 6d3c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6d3c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 4 23:59:16.516106 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 23:59:16.516228 kernel: pci_bus 6d3c:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 23:59:16.516321 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 23:59:16.516412 kernel: pci 6d3c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 4 23:59:16.516503 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 23:59:16.516589 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 23:59:16.516673 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 23:59:16.518469 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:59:16.544033 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 23:59:16.549061 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 23:59:16.570717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:59:16.608106 kernel: mlx5_core 6d3c:00:02.0: enabling device (0000 -> 0002) Sep 4 23:59:16.618040 kernel: mlx5_core 6d3c:00:02.0: firmware version: 16.30.1284 Sep 4 23:59:16.817713 kernel: hv_netvsc 00224879-a22a-0022-4879-a22a00224879 eth0: VF registering: eth1 Sep 4 23:59:16.817914 kernel: mlx5_core 6d3c:00:02.0 eth1: joined to eth0 Sep 4 23:59:16.826108 kernel: mlx5_core 6d3c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 4 23:59:16.836035 kernel: mlx5_core 6d3c:00:02.0 enP27964s1: renamed from eth1 Sep 4 23:59:17.216913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 23:59:17.241063 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (485) Sep 4 23:59:17.259903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 23:59:17.280886 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (484) Sep 4 23:59:17.280578 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 23:59:17.300929 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 23:59:17.308668 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 23:59:17.341158 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 23:59:17.373050 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 23:59:18.392037 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 23:59:18.393725 disk-uuid[606]: The operation has completed successfully. Sep 4 23:59:18.473848 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 23:59:18.473937 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 23:59:18.516160 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 23:59:18.528998 sh[692]: Success Sep 4 23:59:18.557173 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 23:59:18.907321 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 23:59:18.930147 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 4 23:59:18.939929 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 23:59:18.981604 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae Sep 4 23:59:18.981656 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:59:18.988556 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 23:59:18.993619 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 23:59:18.997872 kernel: BTRFS info (device dm-0): using free space tree Sep 4 23:59:19.717961 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 23:59:19.723669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 23:59:19.745297 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 23:59:19.753210 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 23:59:19.802266 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:59:19.802322 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:59:19.808458 kernel: BTRFS info (device sda6): using free space tree Sep 4 23:59:19.867055 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 23:59:19.870189 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:59:19.892978 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:59:19.894249 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:59:19.907397 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 23:59:19.930596 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 23:59:19.943353 systemd-networkd[871]: lo: Link UP Sep 4 23:59:19.943366 systemd-networkd[871]: lo: Gained carrier Sep 4 23:59:19.945039 systemd-networkd[871]: Enumeration completed Sep 4 23:59:19.945590 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:59:19.945594 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:59:19.947695 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:59:19.962291 systemd[1]: Reached target network.target - Network. Sep 4 23:59:20.001073 kernel: mlx5_core 6d3c:00:02.0 enP27964s1: Link up Sep 4 23:59:20.044057 kernel: hv_netvsc 00224879-a22a-0022-4879-a22a00224879 eth0: Data path switched to VF: enP27964s1 Sep 4 23:59:20.044348 systemd-networkd[871]: enP27964s1: Link UP Sep 4 23:59:20.044589 systemd-networkd[871]: eth0: Link UP Sep 4 23:59:20.044941 systemd-networkd[871]: eth0: Gained carrier Sep 4 23:59:20.044951 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 23:59:20.070629 systemd-networkd[871]: enP27964s1: Gained carrier Sep 4 23:59:20.092074 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 23:59:21.251154 systemd-networkd[871]: eth0: Gained IPv6LL Sep 4 23:59:21.652824 ignition[874]: Ignition 2.20.0 Sep 4 23:59:21.652834 ignition[874]: Stage: fetch-offline Sep 4 23:59:21.657964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:59:21.652869 ignition[874]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:59:21.652877 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 23:59:21.652965 ignition[874]: parsed url from cmdline: "" Sep 4 23:59:21.652968 ignition[874]: no config URL provided Sep 4 23:59:21.652972 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:59:21.652979 ignition[874]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:59:21.689292 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 23:59:21.652983 ignition[874]: failed to fetch config: resource requires networking Sep 4 23:59:21.654227 ignition[874]: Ignition finished successfully Sep 4 23:59:21.712435 ignition[883]: Ignition 2.20.0 Sep 4 23:59:21.712441 ignition[883]: Stage: fetch Sep 4 23:59:21.712650 ignition[883]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:59:21.712660 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 23:59:21.712770 ignition[883]: parsed url from cmdline: "" Sep 4 23:59:21.712773 ignition[883]: no config URL provided Sep 4 23:59:21.712778 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:59:21.712785 ignition[883]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:59:21.712812 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 23:59:21.832404 ignition[883]: GET result: OK Sep 4 23:59:21.832468 ignition[883]: config has been read from IMDS userdata Sep 4 23:59:21.832502 ignition[883]: parsing config with SHA512: 8c8c892a4bd302d394d47753076efec510d03b161ae8419bac72fdf8f8eba76348f4b7b610536529b8aed00889bff6116e7584b9e09b2750634fc7beeb274400 Sep 4 23:59:21.837003 unknown[883]: fetched base config from "system" Sep 4 23:59:21.837420 ignition[883]: fetch: fetch complete Sep 4 23:59:21.837011 unknown[883]: fetched base config from "system" Sep 4 23:59:21.837425 ignition[883]: fetch: fetch passed Sep 4 23:59:21.837036 unknown[883]: fetched user config from "azure" Sep 4 23:59:21.837474 ignition[883]: Ignition finished successfully Sep 4 23:59:21.841844 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 23:59:21.864729 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 23:59:21.892124 ignition[890]: Ignition 2.20.0 Sep 4 23:59:21.892132 ignition[890]: Stage: kargs Sep 4 23:59:21.897503 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 23:59:21.892305 ignition[890]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:59:21.892314 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 23:59:21.896337 ignition[890]: kargs: kargs passed Sep 4 23:59:21.896397 ignition[890]: Ignition finished successfully Sep 4 23:59:21.924229 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 23:59:21.947350 ignition[896]: Ignition 2.20.0 Sep 4 23:59:21.950613 ignition[896]: Stage: disks Sep 4 23:59:21.950821 ignition[896]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:59:21.950832 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 23:59:21.960354 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 23:59:21.951740 ignition[896]: disks: disks passed Sep 4 23:59:21.972554 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 23:59:21.951786 ignition[896]: Ignition finished successfully Sep 4 23:59:21.983208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 23:59:21.993449 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:59:22.005697 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:59:22.015901 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:59:22.044291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 23:59:22.134179 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 23:59:22.143243 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 23:59:22.160195 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 23:59:22.223049 kernel: EXT4-fs (sda9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none. Sep 4 23:59:22.223679 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 23:59:22.228607 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 23:59:22.267105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:59:22.290954 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 23:59:22.309651 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (915) Sep 4 23:59:22.309677 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:59:22.299230 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 23:59:22.340526 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:59:22.340551 kernel: BTRFS info (device sda6): using free space tree Sep 4 23:59:22.333723 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 23:59:22.333765 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:59:22.375200 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 23:59:22.355848 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 23:59:22.374870 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 23:59:22.396210 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 4 23:59:23.465577 coreos-metadata[917]: Sep 04 23:59:23.465 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 23:59:23.476886 coreos-metadata[917]: Sep 04 23:59:23.476 INFO Fetch successful Sep 4 23:59:23.476886 coreos-metadata[917]: Sep 04 23:59:23.476 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 23:59:23.501895 coreos-metadata[917]: Sep 04 23:59:23.500 INFO Fetch successful Sep 4 23:59:23.501895 coreos-metadata[917]: Sep 04 23:59:23.500 INFO wrote hostname ci-4230.2.2-n-ff4909b759 to /sysroot/etc/hostname Sep 4 23:59:23.503212 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 23:59:24.584324 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 23:59:24.769805 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory Sep 4 23:59:24.791800 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 23:59:24.800331 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 23:59:26.481872 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 23:59:26.495227 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 23:59:26.510499 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 23:59:26.525137 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 23:59:26.538047 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:59:26.563283 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 23:59:26.575455 ignition[1035]: INFO : Ignition 2.20.0 Sep 4 23:59:26.575455 ignition[1035]: INFO : Stage: mount Sep 4 23:59:26.575455 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:59:26.575455 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 23:59:26.602010 ignition[1035]: INFO : mount: mount passed Sep 4 23:59:26.602010 ignition[1035]: INFO : Ignition finished successfully Sep 4 23:59:26.583333 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 23:59:26.607212 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 23:59:26.627380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:59:26.657115 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1048) Sep 4 23:59:26.657157 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:59:26.670656 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:59:26.674909 kernel: BTRFS info (device sda6): using free space tree Sep 4 23:59:26.683030 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 23:59:26.684633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 23:59:26.707515 ignition[1066]: INFO : Ignition 2.20.0 Sep 4 23:59:26.712132 ignition[1066]: INFO : Stage: files Sep 4 23:59:26.712132 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:59:26.712132 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 23:59:26.712132 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Sep 4 23:59:26.738719 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 23:59:26.738719 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 23:59:26.793830 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 23:59:26.801135 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 23:59:26.801135 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 23:59:26.794315 unknown[1066]: wrote ssh authorized keys file for user: core Sep 4 23:59:26.835395 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 4 23:59:26.846146 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 4 23:59:26.884758 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:59:27.214288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:59:27.305374 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:59:27.305374 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:59:27.305374 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 4 23:59:27.305374 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 4 23:59:27.305374 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 4 23:59:27.305374 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 4 23:59:27.771563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 23:59:28.040728 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 4 23:59:28.040728 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 23:59:28.122341 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:59:28.135330 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:59:28.135330 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 23:59:28.135330 ignition[1066]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 23:59:28.135330 ignition[1066]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 23:59:28.135330 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:59:28.135330 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:59:28.135330 ignition[1066]: INFO : files: files passed Sep 4 23:59:28.135330 ignition[1066]: INFO : Ignition finished successfully Sep 4 23:59:28.134462 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 23:59:28.178334 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 23:59:28.195215 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 23:59:28.223301 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 23:59:28.272705 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:59:28.272705 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:59:28.223394 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 23:59:28.305610 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:59:28.232316 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:59:28.245887 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 23:59:28.275183 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 23:59:28.325560 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 23:59:28.325722 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 23:59:28.336669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 4 23:59:28.348744 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 23:59:28.360615 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 23:59:28.380196 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 23:59:28.411255 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:59:28.428249 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 23:59:28.447310 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 23:59:28.447436 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 23:59:28.461794 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:59:28.474096 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:59:28.488072 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 23:59:28.500522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 23:59:28.500607 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:59:28.518055 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 23:59:28.523722 systemd[1]: Stopped target basic.target - Basic System. Sep 4 23:59:28.535231 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 23:59:28.546840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:59:28.558915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 23:59:28.571119 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 23:59:28.582753 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:59:28.595417 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 23:59:28.606953 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 23:59:28.620403 systemd[1]: Stopped target swap.target - Swaps. Sep 4 23:59:28.630862 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 23:59:28.630956 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:59:28.646192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:59:28.653418 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:59:28.666265 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 23:59:28.666313 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:59:28.679821 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 23:59:28.679912 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 23:59:28.697632 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 23:59:28.697689 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:59:28.705460 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 23:59:28.705513 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Sep 4 23:59:28.791795 ignition[1119]: INFO : Ignition 2.20.0 Sep 4 23:59:28.791795 ignition[1119]: INFO : Stage: umount Sep 4 23:59:28.791795 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:59:28.791795 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 23:59:28.791795 ignition[1119]: INFO : umount: umount passed Sep 4 23:59:28.791795 ignition[1119]: INFO : Ignition finished successfully Sep 4 23:59:28.716973 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 23:59:28.717043 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 23:59:28.749200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 23:59:28.762769 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 23:59:28.762848 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:59:28.773168 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 23:59:28.788131 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 23:59:28.788212 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:59:28.799984 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 23:59:28.800065 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:59:28.817602 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 23:59:28.817697 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 23:59:28.829426 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 23:59:28.829492 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 23:59:28.842437 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 23:59:28.842499 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 23:59:28.852976 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 23:59:28.853051 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 23:59:28.864550 systemd[1]: Stopped target network.target - Network. Sep 4 23:59:28.876445 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 23:59:28.876526 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:59:28.892241 systemd[1]: Stopped target paths.target - Path Units. Sep 4 23:59:28.902087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 23:59:28.914531 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:59:28.922301 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 23:59:28.932423 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 23:59:28.943473 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 23:59:28.943533 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:59:28.956050 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 23:59:28.956101 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:59:28.967382 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 23:59:28.967458 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 23:59:28.979196 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 23:59:28.979243 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 4 23:59:28.990315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 23:59:29.001301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 23:59:29.013332 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 23:59:29.013884 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 23:59:29.269196 kernel: hv_netvsc 00224879-a22a-0022-4879-a22a00224879 eth0: Data path switched from VF: enP27964s1 Sep 4 23:59:29.013974 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 23:59:29.024532 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 23:59:29.024626 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 23:59:29.037491 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 23:59:29.037737 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 23:59:29.037847 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 23:59:29.054464 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 23:59:29.055797 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 23:59:29.055868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:59:29.069251 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 23:59:29.069332 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 23:59:29.095414 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 23:59:29.105087 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 23:59:29.105178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:59:29.118243 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:59:29.118297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:59:29.136631 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 23:59:29.136677 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 23:59:29.143132 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 23:59:29.143174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:59:29.159861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:59:29.168679 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 23:59:29.168752 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:59:29.189435 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 23:59:29.190625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:59:29.202098 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 23:59:29.202175 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 23:59:29.213994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 23:59:29.214150 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:59:29.226179 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 23:59:29.226246 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:59:29.244422 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Sep 4 23:59:29.244486 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 23:59:29.269314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:59:29.269384 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:59:29.292256 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 23:59:29.309660 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 23:59:29.309744 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:59:29.327349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:59:29.327408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:59:29.341567 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 23:59:29.341632 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:59:29.342001 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 23:59:29.342132 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 23:59:29.383782 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 23:59:29.383912 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 23:59:29.397266 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 23:59:29.426269 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 23:59:29.458406 systemd[1]: Switching root. Sep 4 23:59:29.636392 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Sep 4 23:59:29.636442 systemd-journald[218]: Journal stopped Sep 4 23:59:39.112421 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 23:59:39.112445 kernel: SELinux: policy capability open_perms=1 Sep 4 23:59:39.112455 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 23:59:39.112463 kernel: SELinux: policy capability always_check_network=0 Sep 4 23:59:39.112473 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 23:59:39.112480 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 23:59:39.112489 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 23:59:39.112496 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 23:59:39.112504 kernel: audit: type=1403 audit(1757030371.765:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 23:59:39.112514 systemd[1]: Successfully loaded SELinux policy in 306.040ms. Sep 4 23:59:39.112525 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.961ms. Sep 4 23:59:39.112535 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:59:39.112544 systemd[1]: Detected virtualization microsoft. Sep 4 23:59:39.112552 systemd[1]: Detected architecture arm64. Sep 4 23:59:39.112562 systemd[1]: Detected first boot. Sep 4 23:59:39.112572 systemd[1]: Hostname set to . Sep 4 23:59:39.112581 systemd[1]: Initializing machine ID from random generator. Sep 4 23:59:39.112590 zram_generator::config[1162]: No configuration found. 
Sep 4 23:59:39.112600 kernel: NET: Registered PF_VSOCK protocol family Sep 4 23:59:39.112609 systemd[1]: Populated /etc with preset unit settings. Sep 4 23:59:39.112618 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 23:59:39.112627 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 23:59:39.112637 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 23:59:39.112646 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 23:59:39.112655 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 23:59:39.112665 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 23:59:39.112674 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 23:59:39.112682 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 23:59:39.112691 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 23:59:39.112702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 23:59:39.112711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 23:59:39.112720 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 23:59:39.112729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:59:39.112738 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:59:39.112748 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 23:59:39.112757 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 23:59:39.112766 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 23:59:39.112777 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:59:39.112786 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 23:59:39.112796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:59:39.112808 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 23:59:39.112817 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 23:59:39.112826 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 23:59:39.112835 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 23:59:39.112844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:59:39.112855 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:59:39.112864 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:59:39.112873 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:59:39.112882 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 23:59:39.112891 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 23:59:39.112900 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 23:59:39.112912 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:59:39.112921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 4 23:59:39.112930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:59:39.112939 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 23:59:39.112949 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 23:59:39.112958 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 23:59:39.112967 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 23:59:39.112978 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 23:59:39.112988 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 23:59:39.112998 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 23:59:39.113008 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 23:59:39.113028 systemd[1]: Reached target machines.target - Containers. Sep 4 23:59:39.113038 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 23:59:39.113047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:59:39.113057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:59:39.113068 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 23:59:39.113077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:59:39.113087 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:59:39.113096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:59:39.113105 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 23:59:39.113114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:59:39.113124 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 23:59:39.113134 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 23:59:39.113145 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 23:59:39.113154 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 23:59:39.113163 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 23:59:39.113173 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:59:39.113182 kernel: fuse: init (API version 7.39) Sep 4 23:59:39.113191 kernel: loop: module loaded Sep 4 23:59:39.113199 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:59:39.113209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:59:39.113219 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 23:59:39.113231 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 23:59:39.113240 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 23:59:39.113268 systemd-journald[1266]: Collecting audit messages is disabled. 
Sep 4 23:59:39.113288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:59:39.113301 systemd-journald[1266]: Journal started Sep 4 23:59:39.113320 systemd-journald[1266]: Runtime Journal (/run/log/journal/5a3e774c992c4030a2152e99dd5ffe41) is 8M, max 78.5M, 70.5M free. Sep 4 23:59:38.012431 systemd[1]: Queued start job for default target multi-user.target. Sep 4 23:59:38.016783 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 23:59:38.017180 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 23:59:38.017538 systemd[1]: systemd-journald.service: Consumed 3.360s CPU time. Sep 4 23:59:39.137730 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 23:59:39.137814 systemd[1]: Stopped verity-setup.service. Sep 4 23:59:39.151173 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:59:39.151250 kernel: ACPI: bus type drm_connector registered Sep 4 23:59:39.157653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 23:59:39.163396 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 23:59:39.169978 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 23:59:39.175430 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 23:59:39.181723 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 23:59:39.188618 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 23:59:39.194127 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 23:59:39.201492 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:59:39.209987 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 23:59:39.210175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 23:59:39.219370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:59:39.219552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:59:39.226181 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:59:39.226346 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:59:39.233198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:59:39.233353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:59:39.245577 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 23:59:39.247087 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 23:59:39.253811 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:59:39.255062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:59:39.261783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:59:39.268395 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 23:59:39.275687 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 23:59:39.284054 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 23:59:39.296046 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:59:39.312007 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 23:59:39.322108 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Sep 4 23:59:39.330186 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 23:59:39.337581 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 23:59:39.337622 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:59:39.344603 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 23:59:39.361169 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 23:59:39.368498 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 23:59:39.374294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:59:39.375461 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 23:59:39.382715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 23:59:39.389300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:59:39.391228 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 23:59:39.397275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:59:39.398469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:59:39.406184 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 23:59:39.417100 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 23:59:39.425215 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 23:59:39.434165 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 23:59:39.442113 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 23:59:39.449035 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:59:39.461118 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 23:59:39.472769 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 23:59:39.481612 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 23:59:39.492160 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 23:59:39.517035 systemd-journald[1266]: Time spent on flushing to /var/log/journal/5a3e774c992c4030a2152e99dd5ffe41 is 12.141ms for 916 entries. Sep 4 23:59:39.517035 systemd-journald[1266]: System Journal (/var/log/journal/5a3e774c992c4030a2152e99dd5ffe41) is 8M, max 2.6G, 2.6G free. Sep 4 23:59:39.545134 systemd-journald[1266]: Received client request to flush runtime journal. Sep 4 23:59:39.545169 kernel: loop0: detected capacity change from 0 to 113512 Sep 4 23:59:39.548963 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 23:59:39.581815 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 23:59:39.582549 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 23:59:39.596529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 23:59:40.052645 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 23:59:40.065207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:59:40.363323 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Sep 4 23:59:40.363347 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Sep 4 23:59:40.367903 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:59:40.559042 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 23:59:40.633047 kernel: loop1: detected capacity change from 0 to 207008 Sep 4 23:59:40.711044 kernel: loop2: detected capacity change from 0 to 28720 Sep 4 23:59:41.782048 kernel: loop3: detected capacity change from 0 to 123192 Sep 4 23:59:42.056161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 23:59:42.068232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:59:42.095703 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Sep 4 23:59:42.556037 kernel: loop4: detected capacity change from 0 to 113512 Sep 4 23:59:42.577054 kernel: loop5: detected capacity change from 0 to 207008 Sep 4 23:59:42.602048 kernel: loop6: detected capacity change from 0 to 28720 Sep 4 23:59:42.622051 kernel: loop7: detected capacity change from 0 to 123192 Sep 4 23:59:42.634527 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 4 23:59:42.634976 (sd-merge)[1330]: Merged extensions into '/usr'. Sep 4 23:59:42.638893 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 23:59:42.639060 systemd[1]: Reloading... Sep 4 23:59:42.706070 zram_generator::config[1357]: No configuration found. Sep 4 23:59:42.959479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:59:42.983242 kernel: hv_vmbus: registering driver hv_balloon Sep 4 23:59:42.983358 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 4 23:59:42.998046 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 4 23:59:42.998137 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 23:59:43.024401 kernel: hv_vmbus: registering driver hyperv_fb Sep 4 23:59:43.048045 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 4 23:59:43.048131 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 4 23:59:43.058127 kernel: Console: switching to colour dummy device 80x25 Sep 4 23:59:43.061041 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 23:59:43.069401 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 23:59:43.069761 systemd[1]: Reloading finished in 430 ms. Sep 4 23:59:43.086988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:59:43.095326 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 23:59:43.131393 systemd[1]: Starting ensure-sysext.service... Sep 4 23:59:43.140537 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:59:43.149649 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 4 23:59:43.160372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:59:43.185114 systemd[1]: Reload requested from client PID 1456 ('systemctl') (unit ensure-sysext.service)... Sep 4 23:59:43.185135 systemd[1]: Reloading... Sep 4 23:59:43.231543 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 23:59:43.231760 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 23:59:43.235276 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 23:59:43.235517 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Sep 4 23:59:43.235567 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Sep 4 23:59:43.269039 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1429) Sep 4 23:59:43.269154 zram_generator::config[1501]: No configuration found. Sep 4 23:59:43.344954 systemd-tmpfiles[1458]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:59:43.344968 systemd-tmpfiles[1458]: Skipping /boot Sep 4 23:59:43.359540 systemd-tmpfiles[1458]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:59:43.359560 systemd-tmpfiles[1458]: Skipping /boot Sep 4 23:59:43.428209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:59:43.525114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 23:59:43.534095 systemd[1]: Reloading finished in 348 ms. Sep 4 23:59:43.547595 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 23:59:43.566382 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:59:43.608313 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:59:43.641302 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 23:59:43.648706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:59:43.650042 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 23:59:43.658785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:59:43.668281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:59:43.678663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:59:43.686999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:59:43.700310 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 23:59:43.708399 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:59:43.709801 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 23:59:43.720320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 4 23:59:43.728009 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 23:59:43.736697 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 23:59:43.744618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:59:43.744806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:59:43.753108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:59:43.753275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:59:43.762765 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:59:43.762944 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:59:43.779113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:59:43.783658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:59:43.801369 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:59:43.811399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:59:43.826242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:59:43.832868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:59:43.833250 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:59:43.833608 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 23:59:43.842183 lvm[1607]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:59:43.847208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:59:43.847410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:59:43.856081 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:59:43.856266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:59:43.867132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:59:43.867315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:59:43.875213 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 23:59:43.882897 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:59:43.884099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:59:43.892417 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 23:59:43.907989 systemd[1]: Finished ensure-sysext.service. Sep 4 23:59:43.914418 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 23:59:43.928099 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 23:59:43.945963 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:59:43.963176 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 23:59:43.970519 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 23:59:43.970595 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:59:43.971785 lvm[1655]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:59:43.991554 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:59:44.071918 augenrules[1660]: No rules Sep 4 23:59:44.073544 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:59:44.073816 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:59:44.139301 systemd-resolved[1619]: Positive Trust Anchors: Sep 4 23:59:44.139319 systemd-resolved[1619]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:59:44.139351 systemd-resolved[1619]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:59:44.160918 systemd-networkd[1457]: lo: Link UP Sep 4 23:59:44.160929 systemd-networkd[1457]: lo: Gained carrier Sep 4 23:59:44.162922 systemd-networkd[1457]: Enumeration completed Sep 4 23:59:44.163082 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:59:44.163261 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:59:44.163264 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:59:44.182212 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 23:59:44.191350 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 23:59:44.202083 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 23:59:44.212304 systemd-resolved[1619]: Using system hostname 'ci-4230.2.2-n-ff4909b759'. Sep 4 23:59:44.218039 kernel: mlx5_core 6d3c:00:02.0 enP27964s1: Link up Sep 4 23:59:44.250263 kernel: hv_netvsc 00224879-a22a-0022-4879-a22a00224879 eth0: Data path switched to VF: enP27964s1 Sep 4 23:59:44.249921 systemd-networkd[1457]: enP27964s1: Link UP Sep 4 23:59:44.250035 systemd-networkd[1457]: eth0: Link UP Sep 4 23:59:44.250038 systemd-networkd[1457]: eth0: Gained carrier Sep 4 23:59:44.250054 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:59:44.251096 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:59:44.259945 systemd[1]: Reached target network.target - Network. Sep 4 23:59:44.265837 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:59:44.266354 systemd-networkd[1457]: enP27964s1: Gained carrier Sep 4 23:59:44.282085 systemd-networkd[1457]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 23:59:44.284495 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Sep 4 23:59:44.716720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:59:45.571139 systemd-networkd[1457]: eth0: Gained IPv6LL Sep 4 23:59:45.573604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:59:45.582105 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:59:46.786673 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 23:59:46.797612 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 23:59:51.810709 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 23:59:51.827158 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 23:59:51.841201 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 23:59:51.868604 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 23:59:51.875977 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:59:51.882579 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 23:59:51.889927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 23:59:51.898722 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 23:59:51.905140 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:59:51.913856 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:59:51.921733 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:59:51.921783 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:59:51.927635 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:59:51.948930 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:59:51.957474 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:59:51.966371 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:59:51.975011 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:59:51.982744 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:59:51.996883 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:59:52.003845 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:59:52.013091 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:59:52.019510 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:59:52.026237 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:59:52.032281 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:59:52.032318 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:59:52.054201 systemd[1]: Starting chronyd.service - NTP client/server... Sep 4 23:59:52.067203 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 4 23:59:52.075200 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 23:59:52.087186 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:59:52.095184 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 23:59:52.103721 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:59:52.111148 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:59:52.117442 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:59:52.117487 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 4 23:59:52.120966 jq[1688]: false Sep 4 23:59:52.121240 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 4 23:59:52.129181 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 4 23:59:52.130521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:59:52.140211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:59:52.143606 KVP[1690]: KVP starting; pid is:1690 Sep 4 23:59:52.155411 KVP[1690]: KVP LIC Version: 3.1 Sep 4 23:59:52.156073 kernel: hv_utils: KVP IC version 4.0 Sep 4 23:59:52.157606 chronyd[1696]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 23:59:52.160532 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:59:52.168686 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:59:52.177201 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:59:52.186486 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:59:52.198155 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:59:52.205445 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:59:52.214966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:59:52.217960 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 4 23:59:52.229063 extend-filesystems[1689]: Found loop4 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found loop5 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found loop6 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found loop7 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda1 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda2 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda3 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found usr Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda4 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda6 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda7 Sep 4 23:59:52.229063 extend-filesystems[1689]: Found sda9 Sep 4 23:59:52.229063 extend-filesystems[1689]: Checking size of /dev/sda9 Sep 4 23:59:52.228997 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:59:52.247661 chronyd[1696]: Timezone right/UTC failed leap second check, ignoring Sep 4 23:59:52.250028 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:59:52.402663 update_engine[1707]: I20250904 23:59:52.324433 1707 main.cc:92] Flatcar Update Engine starting Sep 4 23:59:52.251176 chronyd[1696]: Loaded seccomp filter (level 2) Sep 4 23:59:52.250231 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:59:52.406661 jq[1710]: true Sep 4 23:59:52.408605 extend-filesystems[1689]: Old size kept for /dev/sda9 Sep 4 23:59:52.408605 extend-filesystems[1689]: Found sr0 Sep 4 23:59:52.252352 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:59:52.252531 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:59:52.452004 tar[1717]: linux-arm64/LICENSE Sep 4 23:59:52.452004 tar[1717]: linux-arm64/helm Sep 4 23:59:52.278429 systemd[1]: Started chronyd.service - NTP client/server. Sep 4 23:59:52.286556 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:59:52.456171 jq[1722]: true Sep 4 23:59:52.289118 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:59:52.301988 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:59:52.346756 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:59:52.350419 systemd-logind[1703]: New seat seat0. Sep 4 23:59:52.359738 systemd-logind[1703]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 4 23:59:52.359966 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:59:52.413559 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:59:52.416132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:59:52.514749 bash[1754]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:59:52.507242 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:59:52.520232 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 23:59:52.533731 dbus-daemon[1686]: [system] SELinux support is enabled Sep 4 23:59:52.533942 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 23:59:52.555193 update_engine[1707]: I20250904 23:59:52.544297 1707 update_check_scheduler.cc:74] Next update check in 10m52s Sep 4 23:59:52.548830 dbus-daemon[1686]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 23:59:52.547485 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:59:52.547508 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:59:52.559419 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:59:52.559444 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:59:52.568120 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:59:52.589387 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:59:52.616041 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1755) Sep 4 23:59:52.700103 coreos-metadata[1683]: Sep 04 23:59:52.699 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 23:59:52.705187 coreos-metadata[1683]: Sep 04 23:59:52.705 INFO Fetch successful Sep 4 23:59:52.705393 coreos-metadata[1683]: Sep 04 23:59:52.705 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 23:59:52.712474 coreos-metadata[1683]: Sep 04 23:59:52.712 INFO Fetch successful Sep 4 23:59:52.712641 coreos-metadata[1683]: Sep 04 23:59:52.712 INFO Fetching http://168.63.129.16/machine/e623cd16-4e6e-4b53-b132-5a3bbe7fc152/1a053b7a%2Dd59c%2D43cb%2D8f47%2Dcfb375e226f2.%5Fci%2D4230.2.2%2Dn%2Dff4909b759?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 23:59:52.715784 coreos-metadata[1683]: Sep 04 23:59:52.715 INFO Fetch successful Sep 4 23:59:52.715784 coreos-metadata[1683]: Sep 04 23:59:52.715 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 23:59:52.732085 coreos-metadata[1683]: Sep 04 23:59:52.732 INFO Fetch successful Sep 4 23:59:52.802545 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 23:59:52.818660 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:59:53.017893 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:59:53.222919 containerd[1723]: time="2025-09-04T23:59:53.222821600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:59:53.263214 tar[1717]: linux-arm64/README.md Sep 4 23:59:53.282782 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:59:53.292158 containerd[1723]: time="2025-09-04T23:59:53.291917840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.298773040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.298814480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.298832400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.298993040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.299009920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.299111440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.299124880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.299327040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.299342320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.299357120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300636 containerd[1723]: time="2025-09-04T23:59:53.299366360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300903 containerd[1723]: time="2025-09-04T23:59:53.299433200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300903 containerd[1723]: time="2025-09-04T23:59:53.299620760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300903 containerd[1723]: time="2025-09-04T23:59:53.299742840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:59:53.300903 containerd[1723]: time="2025-09-04T23:59:53.299756520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:59:53.300903 containerd[1723]: time="2025-09-04T23:59:53.299828240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 4 23:59:53.300903 containerd[1723]: time="2025-09-04T23:59:53.299870480Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.326691320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.326782120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.326801600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.326818720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.326837240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327000080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327309320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327418640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327434200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327448920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327462400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327477200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327491000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:59:53.328389 containerd[1723]: time="2025-09-04T23:59:53.327521560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327537200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327551280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327563920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327575760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327594720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327607600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327619120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327633760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327644960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327658120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327669080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327684080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327697000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328749 containerd[1723]: time="2025-09-04T23:59:53.327712000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327723600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327734920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327746600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327766200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327786400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327801200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327811720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327863680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327880800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327890920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327903320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327912040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327924120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:59:53.328988 containerd[1723]: time="2025-09-04T23:59:53.327933600Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:59:53.329231 containerd[1723]: time="2025-09-04T23:59:53.327943560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 23:59:53.329252 containerd[1723]: time="2025-09-04T23:59:53.328256480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:59:53.329252 containerd[1723]: time="2025-09-04T23:59:53.328303000Z" level=info msg="Connect containerd service" Sep 4 23:59:53.329252 containerd[1723]: time="2025-09-04T23:59:53.328342040Z" level=info msg="using legacy CRI server" Sep 4 23:59:53.329252 containerd[1723]: time="2025-09-04T23:59:53.328349320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:59:53.330366 containerd[1723]: time="2025-09-04T23:59:53.330341280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:59:53.333138 containerd[1723]: time="2025-09-04T23:59:53.333108160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:59:53.335624 containerd[1723]: time="2025-09-04T23:59:53.335587520Z" level=info msg="Start subscribing containerd event" Sep 4 23:59:53.336723 containerd[1723]: time="2025-09-04T23:59:53.336424840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:59:53.336798 containerd[1723]: time="2025-09-04T23:59:53.336777760Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:59:53.339197 containerd[1723]: time="2025-09-04T23:59:53.339169160Z" level=info msg="Start recovering state" Sep 4 23:59:53.339273 containerd[1723]: time="2025-09-04T23:59:53.339254280Z" level=info msg="Start event monitor" Sep 4 23:59:53.339273 containerd[1723]: time="2025-09-04T23:59:53.339271800Z" level=info msg="Start snapshots syncer" Sep 4 23:59:53.339332 containerd[1723]: time="2025-09-04T23:59:53.339283640Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:59:53.339332 containerd[1723]: time="2025-09-04T23:59:53.339292720Z" level=info msg="Start streaming server" Sep 4 23:59:53.345783 containerd[1723]: time="2025-09-04T23:59:53.339363760Z" level=info msg="containerd successfully booted in 0.121767s" Sep 4 23:59:53.339476 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:59:53.470769 sshd_keygen[1718]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:59:53.499945 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:59:53.514838 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:59:53.523277 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 23:59:53.529917 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:59:53.532047 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:59:53.551375 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:59:53.568212 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 23:59:53.581913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:59:53.594311 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:59:53.605341 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 23:59:53.613542 systemd[1]: Reached target getty.target - Login Prompts. 
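The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this point in the boot: the CRI plugin comes up before any CNI plugin has written a network config. A minimal sketch of that readiness check, using only the two directories named in the CRI config dump; this is an illustration, not containerd's actual implementation:

    #!/usr/bin/env python3
    """Sketch: is pod networking configured yet? (paths taken from the CRI config dump above)"""
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")   # NetworkPluginConfDir in the config dump
    CNI_BIN_DIR = Path("/opt/cni/bin")      # NetworkPluginBinDir in the config dump

    def cni_ready() -> bool:
        """True once at least one .conf/.conflist/.json network config exists."""
        return CNI_CONF_DIR.is_dir() and any(
            p.suffix in {".conf", ".conflist", ".json"} for p in CNI_CONF_DIR.iterdir()
        )

    if __name__ == "__main__":
        has_plugins = CNI_BIN_DIR.is_dir() and any(CNI_BIN_DIR.iterdir())
        print(f"CNI plugin binaries present in {CNI_BIN_DIR}: {has_plugins}")
        if cni_ready():
            print(f"CNI config present in {CNI_CONF_DIR}")
        else:
            print(f"no network config found in {CNI_CONF_DIR}; pod networking not ready")

Until the cluster tooling installs a CNI plugin and drops a config there, the CRI plugin keeps running but cannot set up pod networks.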
Sep 4 23:59:53.721089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:59:53.728860 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:59:53.728885 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:59:53.736243 systemd[1]: Startup finished in 696ms (kernel) + 17.223s (initrd) + 22.276s (userspace) = 40.196s. Sep 4 23:59:54.276764 kubelet[1868]: E0904 23:59:54.276691 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:59:54.279131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:59:54.279278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:59:54.279592 systemd[1]: kubelet.service: Consumed 737ms CPU time, 256.9M memory peak. Sep 4 23:59:54.558998 login[1860]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 4 23:59:54.580744 login[1862]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:54.586607 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:59:54.599308 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:59:54.604593 systemd-logind[1703]: New session 1 of user core. Sep 4 23:59:54.637735 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:59:54.646280 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:59:54.671930 (systemd)[1881]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:59:54.674304 systemd-logind[1703]: New session c1 of user core. Sep 4 23:59:55.057085 systemd[1881]: Queued start job for default target default.target. Sep 4 23:59:55.068937 systemd[1881]: Created slice app.slice - User Application Slice. Sep 4 23:59:55.068970 systemd[1881]: Reached target paths.target - Paths. Sep 4 23:59:55.069010 systemd[1881]: Reached target timers.target - Timers. Sep 4 23:59:55.070257 systemd[1881]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:59:55.080131 systemd[1881]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:59:55.080198 systemd[1881]: Reached target sockets.target - Sockets. Sep 4 23:59:55.080245 systemd[1881]: Reached target basic.target - Basic System. Sep 4 23:59:55.080274 systemd[1881]: Reached target default.target - Main User Target. Sep 4 23:59:55.080298 systemd[1881]: Startup finished in 397ms. Sep 4 23:59:55.080430 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:59:55.087202 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:59:55.559381 login[1860]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:55.563328 systemd-logind[1703]: New session 2 of user core. Sep 4 23:59:55.574189 systemd[1]: Started session-2.scope - Session 2 of User core. 
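The kubelet failure above repeats throughout the rest of this log: the service starts, cannot find /var/lib/kubelet/config.yaml, exits with status 1, and systemd schedules another restart. That file is normally written when the node is joined to a cluster (for example by kubeadm, which the unset KUBELET_KUBEADM_ARGS variable hints at). A small pre-flight sketch of the same check, purely illustrative rather than anything the kubelet itself runs:

    #!/usr/bin/env python3
    """Sketch: check for the kubelet config file whose absence causes the exits above."""
    import sys
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error message

    if __name__ == "__main__":
        if KUBELET_CONFIG.is_file():
            print(f"{KUBELET_CONFIG} present; kubelet can load its configuration")
            sys.exit(0)
        # Matches the state in this log: kubelet.service keeps exiting with status 1
        # until node provisioning writes the file.
        print(f"{KUBELET_CONFIG} missing; kubelet will keep failing until it is written",
              file=sys.stderr)
        sys.exit(1)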
Sep 4 23:59:57.016898 waagent[1857]: 2025-09-04T23:59:57.016746Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 23:59:57.023048 waagent[1857]: 2025-09-04T23:59:57.022959Z INFO Daemon Daemon OS: flatcar 4230.2.2 Sep 4 23:59:57.027851 waagent[1857]: 2025-09-04T23:59:57.027783Z INFO Daemon Daemon Python: 3.11.11 Sep 4 23:59:57.036761 waagent[1857]: 2025-09-04T23:59:57.032715Z INFO Daemon Daemon Run daemon Sep 4 23:59:57.037117 waagent[1857]: 2025-09-04T23:59:57.037038Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.2' Sep 4 23:59:57.046441 waagent[1857]: 2025-09-04T23:59:57.046355Z INFO Daemon Daemon Using waagent for provisioning Sep 4 23:59:57.052501 waagent[1857]: 2025-09-04T23:59:57.052436Z INFO Daemon Daemon Activate resource disk Sep 4 23:59:57.057655 waagent[1857]: 2025-09-04T23:59:57.057596Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 23:59:57.072191 waagent[1857]: 2025-09-04T23:59:57.072119Z INFO Daemon Daemon Found device: None Sep 4 23:59:57.077069 waagent[1857]: 2025-09-04T23:59:57.076997Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 23:59:57.086738 waagent[1857]: 2025-09-04T23:59:57.086671Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 23:59:57.099625 waagent[1857]: 2025-09-04T23:59:57.099565Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 23:59:57.106556 waagent[1857]: 2025-09-04T23:59:57.106488Z INFO Daemon Daemon Running default provisioning handler Sep 4 23:59:57.118859 waagent[1857]: 2025-09-04T23:59:57.118775Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 4 23:59:57.133963 waagent[1857]: 2025-09-04T23:59:57.133888Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 23:59:57.145109 waagent[1857]: 2025-09-04T23:59:57.145034Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 23:59:57.151165 waagent[1857]: 2025-09-04T23:59:57.151106Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 23:59:57.367494 waagent[1857]: 2025-09-04T23:59:57.367181Z INFO Daemon Daemon Successfully mounted dvd Sep 4 23:59:57.393984 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 23:59:57.396418 waagent[1857]: 2025-09-04T23:59:57.396335Z INFO Daemon Daemon Detect protocol endpoint Sep 4 23:59:57.401730 waagent[1857]: 2025-09-04T23:59:57.401646Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 23:59:57.408193 waagent[1857]: 2025-09-04T23:59:57.408124Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 4 23:59:57.415340 waagent[1857]: 2025-09-04T23:59:57.415277Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 23:59:57.421321 waagent[1857]: 2025-09-04T23:59:57.421262Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 23:59:57.426962 waagent[1857]: 2025-09-04T23:59:57.426876Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 23:59:57.563272 waagent[1857]: 2025-09-04T23:59:57.563211Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 23:59:57.572058 waagent[1857]: 2025-09-04T23:59:57.571735Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 23:59:57.578452 waagent[1857]: 2025-09-04T23:59:57.578271Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 23:59:57.986064 waagent[1857]: 2025-09-04T23:59:57.985252Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 23:59:57.992444 waagent[1857]: 2025-09-04T23:59:57.992364Z INFO Daemon Daemon Forcing an update of the goal state. Sep 4 23:59:58.002865 waagent[1857]: 2025-09-04T23:59:58.002804Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 23:59:58.077971 waagent[1857]: 2025-09-04T23:59:58.077895Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 4 23:59:58.083873 waagent[1857]: 2025-09-04T23:59:58.083811Z INFO Daemon Sep 4 23:59:58.086824 waagent[1857]: 2025-09-04T23:59:58.086774Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c0b8577b-671e-42dd-9e5f-d5c6c995dd9b eTag: 3755112758201535392 source: Fabric] Sep 4 23:59:58.100319 waagent[1857]: 2025-09-04T23:59:58.100265Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 23:59:58.108489 waagent[1857]: 2025-09-04T23:59:58.108428Z INFO Daemon Sep 4 23:59:58.111829 waagent[1857]: 2025-09-04T23:59:58.111774Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 23:59:58.123532 waagent[1857]: 2025-09-04T23:59:58.123494Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 23:59:58.203695 waagent[1857]: 2025-09-04T23:59:58.203596Z INFO Daemon Downloaded certificate {'thumbprint': '26074E914A02A5F72DBC2D203A691C0BC35898AD', 'hasPrivateKey': True} Sep 4 23:59:58.215812 waagent[1857]: 2025-09-04T23:59:58.215743Z INFO Daemon Fetch goal state completed Sep 4 23:59:58.227337 waagent[1857]: 2025-09-04T23:59:58.227282Z INFO Daemon Daemon Starting provisioning Sep 4 23:59:58.232621 waagent[1857]: 2025-09-04T23:59:58.232562Z INFO Daemon Daemon Handle ovf-env.xml. Sep 4 23:59:58.237519 waagent[1857]: 2025-09-04T23:59:58.237420Z INFO Daemon Daemon Set hostname [ci-4230.2.2-n-ff4909b759] Sep 4 23:59:58.264045 waagent[1857]: 2025-09-04T23:59:58.263526Z INFO Daemon Daemon Publish hostname [ci-4230.2.2-n-ff4909b759] Sep 4 23:59:58.271024 waagent[1857]: 2025-09-04T23:59:58.270939Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 23:59:58.277524 waagent[1857]: 2025-09-04T23:59:58.277465Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 23:59:58.290681 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:59:58.291422 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
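The "Test for route to 168.63.129.16" step above verifies that the Azure wireserver is reachable before provisioning continues. A simplified stand-in for that check, parsing /proc/net/route (the same table the MonitorHandler dumps later in this log); the byte-order handling assumes a little-endian host such as this aarch64 VM, and the agent's real logic may differ:

    #!/usr/bin/env python3
    """Sketch: does the kernel routing table cover the wireserver? (cf. the route test above)"""
    import socket
    import struct

    WIRESERVER = "168.63.129.16"

    def _le_hex_to_be(hex_le: str) -> int:
        # /proc/net/route prints addresses as host-order (little-endian) hex,
        # e.g. 10813FA8 is 168.63.129.16 in the table dumped later in this log.
        return struct.unpack(">I", struct.pack("<I", int(hex_le, 16)))[0]

    def route_exists(target: str = WIRESERVER) -> bool:
        want = struct.unpack(">I", socket.inet_aton(target))[0]
        with open("/proc/net/route") as fh:
            next(fh)  # column header: Iface Destination Gateway ... Mask ...
            for line in fh:
                fields = line.split()
                dest, mask = _le_hex_to_be(fields[1]), _le_hex_to_be(fields[7])
                if (want & mask) == dest:
                    return True
        return False

    if __name__ == "__main__":
        print(f"Route to {WIRESERVER} {'exists' if route_exists() else 'is missing'}")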
Sep 4 23:59:58.291456 systemd-networkd[1457]: eth0: DHCP lease lost Sep 4 23:59:58.291855 waagent[1857]: 2025-09-04T23:59:58.291777Z INFO Daemon Daemon Create user account if not exists Sep 4 23:59:58.299652 waagent[1857]: 2025-09-04T23:59:58.299555Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 23:59:58.311044 waagent[1857]: 2025-09-04T23:59:58.306094Z INFO Daemon Daemon Configure sudoer Sep 4 23:59:58.311770 waagent[1857]: 2025-09-04T23:59:58.311696Z INFO Daemon Daemon Configure sshd Sep 4 23:59:58.316576 waagent[1857]: 2025-09-04T23:59:58.316514Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 23:59:58.336075 waagent[1857]: 2025-09-04T23:59:58.331012Z INFO Daemon Daemon Deploy ssh public key. Sep 4 23:59:58.345092 systemd-networkd[1457]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 23:59:59.479496 waagent[1857]: 2025-09-04T23:59:59.479440Z INFO Daemon Daemon Provisioning complete Sep 4 23:59:59.496321 waagent[1857]: 2025-09-04T23:59:59.496266Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 23:59:59.503557 waagent[1857]: 2025-09-04T23:59:59.503504Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 23:59:59.514770 waagent[1857]: 2025-09-04T23:59:59.514696Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 23:59:59.645837 waagent[1931]: 2025-09-04T23:59:59.645304Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 23:59:59.645837 waagent[1931]: 2025-09-04T23:59:59.645458Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.2 Sep 4 23:59:59.645837 waagent[1931]: 2025-09-04T23:59:59.645512Z INFO ExtHandler ExtHandler Python: 3.11.11 Sep 4 23:59:59.786056 waagent[1931]: 2025-09-04T23:59:59.785547Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 23:59:59.786056 waagent[1931]: 2025-09-04T23:59:59.785796Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 23:59:59.786056 waagent[1931]: 2025-09-04T23:59:59.785857Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 23:59:59.793985 waagent[1931]: 2025-09-04T23:59:59.793913Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 23:59:59.800228 waagent[1931]: 2025-09-04T23:59:59.800181Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 4 23:59:59.800745 waagent[1931]: 2025-09-04T23:59:59.800697Z INFO ExtHandler Sep 4 23:59:59.800818 waagent[1931]: 2025-09-04T23:59:59.800785Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f85bc345-2ed1-4935-9b5d-412412972a61 eTag: 3755112758201535392 source: Fabric] Sep 4 23:59:59.801134 waagent[1931]: 2025-09-04T23:59:59.801089Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
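Both the metadata agent earlier in this log and the waagent goal-state handling here talk to the same two HTTP endpoints: the wireserver at 168.63.129.16 and the instance metadata service at 169.254.169.254. A stdlib-only sketch of those probes, reusing URLs already shown in the log; the one extra detail is that IMDS requires a "Metadata: true" request header:

    #!/usr/bin/env python3
    """Sketch: the wireserver/IMDS probes logged above, standard library only."""
    import urllib.request

    WIRESERVER_VERSIONS = "http://168.63.129.16/?comp=versions"
    IMDS_VMSIZE = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                   "?api-version=2017-08-01&format=text")

    def fetch(url: str, headers: dict | None = None, timeout: float = 5.0) -> str:
        req = urllib.request.Request(url, headers=headers or {})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        # Wireserver: plain GET, answers with the supported wire protocol versions.
        print(fetch(WIRESERVER_VERSIONS)[:200])
        # IMDS: every request must carry the Metadata: true header.
        print(fetch(IMDS_VMSIZE, headers={"Metadata": "true"}))

Both addresses are only reachable from inside the VM, so this sketch is meaningful only on an Azure guest.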
Sep 4 23:59:59.801722 waagent[1931]: 2025-09-04T23:59:59.801672Z INFO ExtHandler Sep 4 23:59:59.801786 waagent[1931]: 2025-09-04T23:59:59.801756Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 23:59:59.805943 waagent[1931]: 2025-09-04T23:59:59.805903Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 23:59:59.887075 waagent[1931]: 2025-09-04T23:59:59.886872Z INFO ExtHandler Downloaded certificate {'thumbprint': '26074E914A02A5F72DBC2D203A691C0BC35898AD', 'hasPrivateKey': True} Sep 4 23:59:59.887523 waagent[1931]: 2025-09-04T23:59:59.887472Z INFO ExtHandler Fetch goal state completed Sep 4 23:59:59.902956 waagent[1931]: 2025-09-04T23:59:59.902892Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1931 Sep 4 23:59:59.903142 waagent[1931]: 2025-09-04T23:59:59.903100Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 23:59:59.904770 waagent[1931]: 2025-09-04T23:59:59.904721Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.2', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 23:59:59.905166 waagent[1931]: 2025-09-04T23:59:59.905123Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 5 00:00:00.018218 waagent[1931]: 2025-09-05T00:00:00.018168Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 5 00:00:00.018421 waagent[1931]: 2025-09-05T00:00:00.018380Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 5 00:00:00.033340 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 5 00:00:00.035985 waagent[1931]: 2025-09-05T00:00:00.035445Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 5 00:00:00.041827 systemd[1]: Reload requested from client PID 1945 ('systemctl') (unit waagent.service)... Sep 5 00:00:00.041840 systemd[1]: Reloading... Sep 5 00:00:00.150059 zram_generator::config[1989]: No configuration found. Sep 5 00:00:00.252795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:00:00.353648 systemd[1]: Reloading finished in 311 ms. Sep 5 00:00:00.381574 waagent[1931]: 2025-09-05T00:00:00.381132Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 5 00:00:00.381671 systemd[1]: logrotate.service: Deactivated successfully. Sep 5 00:00:00.388352 systemd[1]: Reload requested from client PID 2040 ('systemctl') (unit waagent.service)... Sep 5 00:00:00.388366 systemd[1]: Reloading... Sep 5 00:00:00.481188 zram_generator::config[2080]: No configuration found. Sep 5 00:00:00.594946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:00:00.693882 systemd[1]: Reloading finished in 305 ms. 
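The two "Reload requested ... Reloading ... Reloading finished in N ms" cycles above are waagent running systemctl daemon-reload after installing waagent-network-setup.service. A trivial sketch of that step, timed roughly the way systemd reports it (requires systemd and sufficient privileges; purely illustrative):

    #!/usr/bin/env python3
    """Sketch: the daemon-reload waagent triggers above, with a rough timing printout."""
    import subprocess
    import time

    if __name__ == "__main__":
        start = time.monotonic()
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        print(f"Reloading finished in {int((time.monotonic() - start) * 1000)} ms")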
Sep 5 00:00:00.716033 waagent[1931]: 2025-09-05T00:00:00.712829Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 5 00:00:00.716033 waagent[1931]: 2025-09-05T00:00:00.712996Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 5 00:00:01.465054 waagent[1931]: 2025-09-05T00:00:01.464465Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 5 00:00:01.465201 waagent[1931]: 2025-09-05T00:00:01.465126Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 5 00:00:01.466095 waagent[1931]: 2025-09-05T00:00:01.465994Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 5 00:00:01.466301 waagent[1931]: 2025-09-05T00:00:01.466159Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 5 00:00:01.466399 waagent[1931]: 2025-09-05T00:00:01.466359Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 5 00:00:01.466640 waagent[1931]: 2025-09-05T00:00:01.466594Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 5 00:00:01.467075 waagent[1931]: 2025-09-05T00:00:01.467002Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 5 00:00:01.467509 waagent[1931]: 2025-09-05T00:00:01.467450Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 5 00:00:01.467734 waagent[1931]: 2025-09-05T00:00:01.467689Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 5 00:00:01.467734 waagent[1931]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 5 00:00:01.467734 waagent[1931]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 5 00:00:01.467734 waagent[1931]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 5 00:00:01.467734 waagent[1931]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 5 00:00:01.467734 waagent[1931]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 5 00:00:01.467734 waagent[1931]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 5 00:00:01.468153 waagent[1931]: 2025-09-05T00:00:01.468084Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 5 00:00:01.468425 waagent[1931]: 2025-09-05T00:00:01.468221Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 5 00:00:01.468425 waagent[1931]: 2025-09-05T00:00:01.468307Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 5 00:00:01.468491 waagent[1931]: 2025-09-05T00:00:01.468442Z INFO EnvHandler ExtHandler Configure routes Sep 5 00:00:01.468537 waagent[1931]: 2025-09-05T00:00:01.468502Z INFO EnvHandler ExtHandler Gateway:None Sep 5 00:00:01.469067 waagent[1931]: 2025-09-05T00:00:01.468978Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 5 00:00:01.469208 waagent[1931]: 2025-09-05T00:00:01.469090Z INFO EnvHandler ExtHandler Routes:None Sep 5 00:00:01.469740 waagent[1931]: 2025-09-05T00:00:01.469665Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 5 00:00:01.469798 waagent[1931]: 2025-09-05T00:00:01.469754Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 5 00:00:01.476634 waagent[1931]: 2025-09-05T00:00:01.476541Z INFO ExtHandler ExtHandler Sep 5 00:00:01.477185 waagent[1931]: 2025-09-05T00:00:01.477125Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 83eb920f-9555-4a6e-82a1-e8a1fb969f3a correlation 67cc470f-9a22-40d0-91ce-540403fe4fa9 created: 2025-09-04T23:58:29.472191Z] Sep 5 00:00:01.478084 waagent[1931]: 2025-09-05T00:00:01.477998Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 5 00:00:01.479558 waagent[1931]: 2025-09-05T00:00:01.478704Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Sep 5 00:00:01.545849 waagent[1931]: 2025-09-05T00:00:01.545784Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AC1FA0DC-834F-43E2-9D50-F82517B5BDC2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 5 00:00:01.595788 waagent[1931]: 2025-09-05T00:00:01.595689Z INFO MonitorHandler ExtHandler Network interfaces: Sep 5 00:00:01.595788 waagent[1931]: Executing ['ip', '-a', '-o', 'link']: Sep 5 00:00:01.595788 waagent[1931]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 5 00:00:01.595788 waagent[1931]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:a2:2a brd ff:ff:ff:ff:ff:ff Sep 5 00:00:01.595788 waagent[1931]: 3: enP27964s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:a2:2a brd ff:ff:ff:ff:ff:ff\ altname enP27964p0s2 Sep 5 00:00:01.595788 waagent[1931]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 5 00:00:01.595788 waagent[1931]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 5 00:00:01.595788 waagent[1931]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 5 00:00:01.595788 waagent[1931]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 5 00:00:01.595788 waagent[1931]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 5 00:00:01.595788 waagent[1931]: 2: eth0 inet6 fe80::222:48ff:fe79:a22a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 5 00:00:01.720124 waagent[1931]: 2025-09-05T00:00:01.719450Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 5 00:00:01.720124 waagent[1931]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 00:00:01.720124 waagent[1931]: pkts bytes target prot opt in out source destination Sep 5 00:00:01.720124 waagent[1931]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 5 00:00:01.720124 waagent[1931]: pkts bytes target prot opt in out source destination Sep 5 00:00:01.720124 waagent[1931]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 00:00:01.720124 waagent[1931]: pkts bytes target prot opt in out source destination Sep 5 00:00:01.720124 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 5 00:00:01.720124 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 5 00:00:01.720124 waagent[1931]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 5 00:00:01.722791 waagent[1931]: 2025-09-05T00:00:01.722715Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 5 00:00:01.722791 waagent[1931]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 00:00:01.722791 waagent[1931]: pkts bytes target prot opt in out source destination Sep 5 00:00:01.722791 waagent[1931]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 5 00:00:01.722791 waagent[1931]: pkts bytes target prot opt in out source destination Sep 5 00:00:01.722791 waagent[1931]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 5 00:00:01.722791 waagent[1931]: pkts bytes target prot opt in out source destination Sep 5 00:00:01.722791 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 5 00:00:01.722791 waagent[1931]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 5 00:00:01.722791 waagent[1931]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 5 00:00:01.723082 waagent[1931]: 2025-09-05T00:00:01.723037Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 5 00:00:04.517396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:00:04.527234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:04.636659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:04.650392 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:00:04.767082 kubelet[2173]: E0905 00:00:04.767001 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:00:04.770305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:00:04.770573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:00:04.771167 systemd[1]: kubelet.service: Consumed 132ms CPU time, 109.4M memory peak. Sep 5 00:00:15.017555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:00:15.024207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:15.323466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
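The firewall listing above shows the three OUTPUT rules waagent programs for the wireserver: ACCEPT for DNS (tcp dpt:53), ACCEPT for traffic owned by UID 0, and DROP for other new connections to 168.63.129.16. A sketch that re-reads those rules with iptables (assumes the iptables binary and root privileges; the output is iptables-save syntax rather than the counter table shown above):

    #!/usr/bin/env python3
    """Sketch: list the wireserver rules in the OUTPUT chain shown above."""
    import subprocess

    WIRESERVER = "168.63.129.16"

    def wireserver_rules() -> list[str]:
        # -S prints rules in iptables-save syntax; -w waits for the xtables lock.
        out = subprocess.run(["iptables", "-w", "-S", "OUTPUT"],
                             check=True, capture_output=True, text=True).stdout
        return [line for line in out.splitlines() if WIRESERVER in line]

    if __name__ == "__main__":
        rules = wireserver_rules()
        for rule in rules:
            print(rule)
        if not any("-j DROP" in rule for rule in rules):
            print("wireserver DROP rule not installed yet (the environment thread adds it later)")

The "environment thread" note matches the earlier "DROP rule is not available ... Environment thread will set it up" message in this log.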
Sep 5 00:00:15.327247 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:00:15.365979 kubelet[2188]: E0905 00:00:15.365916 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:00:15.368254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:00:15.368407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:00:15.368706 systemd[1]: kubelet.service: Consumed 129ms CPU time, 107.2M memory peak. Sep 5 00:00:16.047523 chronyd[1696]: Selected source PHC0 Sep 5 00:00:17.946989 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:00:17.955286 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:43994.service - OpenSSH per-connection server daemon (10.200.16.10:43994). Sep 5 00:00:18.655910 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 43994 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI Sep 5 00:00:18.657181 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:00:18.661710 systemd-logind[1703]: New session 3 of user core. Sep 5 00:00:18.669183 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:00:19.118523 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:44002.service - OpenSSH per-connection server daemon (10.200.16.10:44002). Sep 5 00:00:19.638554 sshd[2201]: Accepted publickey for core from 10.200.16.10 port 44002 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI Sep 5 00:00:19.639757 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:00:19.645065 systemd-logind[1703]: New session 4 of user core. Sep 5 00:00:19.657267 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:00:20.001980 sshd[2203]: Connection closed by 10.200.16.10 port 44002 Sep 5 00:00:20.001630 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Sep 5 00:00:20.005326 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:44002.service: Deactivated successfully. Sep 5 00:00:20.006895 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:00:20.008190 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:00:20.009309 systemd-logind[1703]: Removed session 4. Sep 5 00:00:20.085576 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:60100.service - OpenSSH per-connection server daemon (10.200.16.10:60100). Sep 5 00:00:20.581489 sshd[2209]: Accepted publickey for core from 10.200.16.10 port 60100 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI Sep 5 00:00:20.582779 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:00:20.587228 systemd-logind[1703]: New session 5 of user core. Sep 5 00:00:20.593169 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:00:20.924909 sshd[2211]: Connection closed by 10.200.16.10 port 60100 Sep 5 00:00:20.924741 sshd-session[2209]: pam_unix(sshd:session): session closed for user core Sep 5 00:00:20.928219 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:60100.service: Deactivated successfully. 
Sep 5 00:00:20.929736 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:00:20.932761 systemd-logind[1703]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:00:20.934825 systemd-logind[1703]: Removed session 5. Sep 5 00:00:21.020264 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:60110.service - OpenSSH per-connection server daemon (10.200.16.10:60110). Sep 5 00:00:21.508297 sshd[2217]: Accepted publickey for core from 10.200.16.10 port 60110 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI Sep 5 00:00:21.509561 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:00:21.513595 systemd-logind[1703]: New session 6 of user core. Sep 5 00:00:21.524195 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:00:21.878421 sshd[2219]: Connection closed by 10.200.16.10 port 60110 Sep 5 00:00:21.879071 sshd-session[2217]: pam_unix(sshd:session): session closed for user core Sep 5 00:00:21.882420 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:60110.service: Deactivated successfully. Sep 5 00:00:21.884297 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:00:21.884953 systemd-logind[1703]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:00:21.885816 systemd-logind[1703]: Removed session 6. Sep 5 00:00:21.976269 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:60112.service - OpenSSH per-connection server daemon (10.200.16.10:60112). Sep 5 00:00:22.451925 sshd[2225]: Accepted publickey for core from 10.200.16.10 port 60112 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI Sep 5 00:00:22.453203 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:00:22.458943 systemd-logind[1703]: New session 7 of user core. Sep 5 00:00:22.464241 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:00:22.919506 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:00:22.919770 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:00:25.377098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 00:00:25.383267 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:00:25.383406 (dockerd)[2245]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:00:25.386263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:25.887183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:25.887642 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:00:25.933365 kubelet[2254]: E0905 00:00:25.933288 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:00:25.935831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:00:25.935974 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 5 00:00:25.936703 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.7M memory peak. Sep 5 00:00:27.016382 dockerd[2245]: time="2025-09-05T00:00:27.016124492Z" level=info msg="Starting up" Sep 5 00:00:27.523180 dockerd[2245]: time="2025-09-05T00:00:27.523137884Z" level=info msg="Loading containers: start." Sep 5 00:00:27.807173 kernel: Initializing XFRM netlink socket Sep 5 00:00:28.063987 systemd-networkd[1457]: docker0: Link UP Sep 5 00:00:28.097280 dockerd[2245]: time="2025-09-05T00:00:28.097229370Z" level=info msg="Loading containers: done." Sep 5 00:00:28.118061 dockerd[2245]: time="2025-09-05T00:00:28.117691503Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:00:28.118061 dockerd[2245]: time="2025-09-05T00:00:28.117795303Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 5 00:00:28.118061 dockerd[2245]: time="2025-09-05T00:00:28.117919703Z" level=info msg="Daemon has completed initialization" Sep 5 00:00:28.193215 dockerd[2245]: time="2025-09-05T00:00:28.193123258Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:00:28.193398 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:00:29.064010 containerd[1723]: time="2025-09-05T00:00:29.063961631Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 5 00:00:29.942168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461775548.mount: Deactivated successfully. Sep 5 00:00:31.121044 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 5 00:00:31.509430 containerd[1723]: time="2025-09-05T00:00:31.509290270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:31.515435 containerd[1723]: time="2025-09-05T00:00:31.515193395Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357" Sep 5 00:00:31.524898 containerd[1723]: time="2025-09-05T00:00:31.522946722Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:31.530135 containerd[1723]: time="2025-09-05T00:00:31.530096087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:31.532037 containerd[1723]: time="2025-09-05T00:00:31.531402689Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.467395896s" Sep 5 00:00:31.532037 containerd[1723]: time="2025-09-05T00:00:31.531438289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 5 00:00:31.532661 containerd[1723]: time="2025-09-05T00:00:31.532532449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 5 
00:00:33.217085 containerd[1723]: time="2025-09-05T00:00:33.217008375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:33.223077 containerd[1723]: time="2025-09-05T00:00:33.223006190Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552" Sep 5 00:00:33.228118 containerd[1723]: time="2025-09-05T00:00:33.228064283Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:33.236596 containerd[1723]: time="2025-09-05T00:00:33.236530464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:33.238455 containerd[1723]: time="2025-09-05T00:00:33.237670067Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.705107298s" Sep 5 00:00:33.238455 containerd[1723]: time="2025-09-05T00:00:33.237708627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 5 00:00:33.238455 containerd[1723]: time="2025-09-05T00:00:33.238392829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 5 00:00:34.698664 containerd[1723]: time="2025-09-05T00:00:34.698601501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:34.701755 containerd[1723]: time="2025-09-05T00:00:34.701702109Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527" Sep 5 00:00:34.705493 containerd[1723]: time="2025-09-05T00:00:34.705442118Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:34.714448 containerd[1723]: time="2025-09-05T00:00:34.714383941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:34.715494 containerd[1723]: time="2025-09-05T00:00:34.715452704Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.477033155s" Sep 5 00:00:34.715494 containerd[1723]: time="2025-09-05T00:00:34.715490664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 5 00:00:34.716100 containerd[1723]: 
time="2025-09-05T00:00:34.715923865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 5 00:00:35.962008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590326421.mount: Deactivated successfully. Sep 5 00:00:35.964934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 5 00:00:35.974525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:36.096734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:36.107325 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:00:36.209049 kubelet[2518]: E0905 00:00:36.208932 2518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:00:36.211442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:00:36.211596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:00:36.212215 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.1M memory peak. Sep 5 00:00:36.926100 containerd[1723]: time="2025-09-05T00:00:36.926039703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:36.930546 containerd[1723]: time="2025-09-05T00:00:36.930391754Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724" Sep 5 00:00:36.934220 containerd[1723]: time="2025-09-05T00:00:36.934162803Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:36.940602 containerd[1723]: time="2025-09-05T00:00:36.940528979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:36.941390 containerd[1723]: time="2025-09-05T00:00:36.941203261Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 2.225250756s" Sep 5 00:00:36.941390 containerd[1723]: time="2025-09-05T00:00:36.941239541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 5 00:00:36.941906 containerd[1723]: time="2025-09-05T00:00:36.941772542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:00:37.550945 update_engine[1707]: I20250905 00:00:37.550398 1707 update_attempter.cc:509] Updating boot flags... Sep 5 00:00:37.584224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219079832.mount: Deactivated successfully. 
Sep 5 00:00:37.661470 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2548) Sep 5 00:00:39.301069 containerd[1723]: time="2025-09-05T00:00:39.300623039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:39.305809 containerd[1723]: time="2025-09-05T00:00:39.305552725Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 5 00:00:39.309243 containerd[1723]: time="2025-09-05T00:00:39.309187929Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:39.317736 containerd[1723]: time="2025-09-05T00:00:39.317651778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:39.319084 containerd[1723]: time="2025-09-05T00:00:39.318934500Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.377126158s" Sep 5 00:00:39.319084 containerd[1723]: time="2025-09-05T00:00:39.318968180Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 00:00:39.319683 containerd[1723]: time="2025-09-05T00:00:39.319551540Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:00:39.920168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863726589.mount: Deactivated successfully. 
Sep 5 00:00:39.943547 containerd[1723]: time="2025-09-05T00:00:39.943499165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:39.947558 containerd[1723]: time="2025-09-05T00:00:39.947502129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 5 00:00:39.951771 containerd[1723]: time="2025-09-05T00:00:39.951713334Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:39.961292 containerd[1723]: time="2025-09-05T00:00:39.961211545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:39.962073 containerd[1723]: time="2025-09-05T00:00:39.961908105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 642.013364ms" Sep 5 00:00:39.962073 containerd[1723]: time="2025-09-05T00:00:39.961944985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 00:00:39.962911 containerd[1723]: time="2025-09-05T00:00:39.962591866Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 00:00:40.820271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246305552.mount: Deactivated successfully. Sep 5 00:00:44.848004 containerd[1723]: time="2025-09-05T00:00:44.847934055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:44.851093 containerd[1723]: time="2025-09-05T00:00:44.851038139Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 5 00:00:44.854489 containerd[1723]: time="2025-09-05T00:00:44.854430743Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:44.862091 containerd[1723]: time="2025-09-05T00:00:44.862025194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:00:44.863283 containerd[1723]: time="2025-09-05T00:00:44.863239715Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.900617089s" Sep 5 00:00:44.863283 containerd[1723]: time="2025-09-05T00:00:44.863279275Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 5 00:00:46.267432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
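The journal above shows kubelet.service crash-looping because /var/lib/kubelet/config.yaml does not exist yet, with systemd's restart counter climbing to 4 and then 5. A small reading aid, assuming the journal text is available as plain lines (for example from journalctl -u kubelet), that tallies those restart records; it is illustrative only and not part of the boot flow:

import re

# Count systemd restart-scheduling records for kubelet in a journal dump.
RESTART_RE = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")

def restart_counters(journal_text: str) -> list[int]:
    return [int(m.group(1)) for m in RESTART_RE.finditer(journal_text)]

sample = (
    "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.\n"
    "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.\n"
)
print(restart_counters(sample))   # [4, 5]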
Sep 5 00:00:46.276342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:46.394214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:46.397980 (kubelet)[2731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:00:46.444957 kubelet[2731]: E0905 00:00:46.444900 2731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:00:46.447322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:00:46.447471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:00:46.448282 systemd[1]: kubelet.service: Consumed 127ms CPU time, 108.7M memory peak. Sep 5 00:00:50.505965 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:50.506163 systemd[1]: kubelet.service: Consumed 127ms CPU time, 108.7M memory peak. Sep 5 00:00:50.515269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:50.544064 systemd[1]: Reload requested from client PID 2746 ('systemctl') (unit session-7.scope)... Sep 5 00:00:50.544234 systemd[1]: Reloading... Sep 5 00:00:50.678340 zram_generator::config[2794]: No configuration found. Sep 5 00:00:50.778573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:00:50.882278 systemd[1]: Reloading finished in 337 ms. Sep 5 00:00:50.932330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:50.936471 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:50.938336 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:00:50.938581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:50.938626 systemd[1]: kubelet.service: Consumed 87ms CPU time, 94.9M memory peak. Sep 5 00:00:50.944459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:00:51.051796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:00:51.063630 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:00:51.098119 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:00:51.098119 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:00:51.098119 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:00:51.098521 kubelet[2862]: I0905 00:00:51.098216 2862 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:00:52.551778 kubelet[2862]: I0905 00:00:52.551730 2862 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 00:00:52.551778 kubelet[2862]: I0905 00:00:52.551769 2862 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:00:52.552148 kubelet[2862]: I0905 00:00:52.552064 2862 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 00:00:52.574155 kubelet[2862]: E0905 00:00:52.574113 2862 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:52.579458 kubelet[2862]: I0905 00:00:52.579260 2862 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:00:52.584100 kubelet[2862]: E0905 00:00:52.584061 2862 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:00:52.584100 kubelet[2862]: I0905 00:00:52.584093 2862 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:00:52.587113 kubelet[2862]: I0905 00:00:52.587084 2862 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:00:52.587903 kubelet[2862]: I0905 00:00:52.587869 2862 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:00:52.588100 kubelet[2862]: I0905 00:00:52.587905 2862 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-ff4909b759","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:00:52.588191 kubelet[2862]: I0905 00:00:52.588112 2862 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:00:52.588191 kubelet[2862]: I0905 00:00:52.588122 2862 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 00:00:52.588271 kubelet[2862]: I0905 00:00:52.588249 2862 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:00:52.591299 kubelet[2862]: I0905 00:00:52.591280 2862 kubelet.go:446] "Attempting to sync node with API server" Sep 5 00:00:52.591346 kubelet[2862]: I0905 00:00:52.591308 2862 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:00:52.591346 kubelet[2862]: I0905 00:00:52.591328 2862 kubelet.go:352] "Adding apiserver pod source" Sep 5 00:00:52.591346 kubelet[2862]: I0905 00:00:52.591345 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:00:52.599313 kubelet[2862]: W0905 00:00:52.598658 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:52.599313 kubelet[2862]: E0905 00:00:52.598723 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:52.599313 kubelet[2862]: W0905 00:00:52.598988 
2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-ff4909b759&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:52.599313 kubelet[2862]: E0905 00:00:52.599041 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-ff4909b759&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:52.599497 kubelet[2862]: I0905 00:00:52.599458 2862 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 5 00:00:52.599978 kubelet[2862]: I0905 00:00:52.599956 2862 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:00:52.600065 kubelet[2862]: W0905 00:00:52.600050 2862 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:00:52.600610 kubelet[2862]: I0905 00:00:52.600584 2862 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:00:52.600665 kubelet[2862]: I0905 00:00:52.600629 2862 server.go:1287] "Started kubelet" Sep 5 00:00:52.601820 kubelet[2862]: I0905 00:00:52.601328 2862 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:00:52.602424 kubelet[2862]: I0905 00:00:52.602405 2862 server.go:479] "Adding debug handlers to kubelet server" Sep 5 00:00:52.603873 kubelet[2862]: I0905 00:00:52.603798 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:00:52.604138 kubelet[2862]: I0905 00:00:52.604118 2862 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:00:52.604883 kubelet[2862]: E0905 00:00:52.604278 2862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-ff4909b759.186239e332cd9a1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-ff4909b759,UID:ci-4230.2.2-n-ff4909b759,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-ff4909b759,},FirstTimestamp:2025-09-05 00:00:52.600609308 +0000 UTC m=+1.533441807,LastTimestamp:2025-09-05 00:00:52.600609308 +0000 UTC m=+1.533441807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-ff4909b759,}" Sep 5 00:00:52.606457 kubelet[2862]: I0905 00:00:52.606433 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:00:52.607348 kubelet[2862]: I0905 00:00:52.607316 2862 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:00:52.611488 kubelet[2862]: E0905 00:00:52.611456 2862 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:00:52.613723 kubelet[2862]: E0905 00:00:52.612938 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:52.613723 kubelet[2862]: I0905 00:00:52.612979 2862 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:00:52.613723 kubelet[2862]: I0905 00:00:52.613218 2862 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:00:52.613723 kubelet[2862]: I0905 00:00:52.613277 2862 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:00:52.614594 kubelet[2862]: E0905 00:00:52.614555 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-ff4909b759?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Sep 5 00:00:52.614709 kubelet[2862]: W0905 00:00:52.614669 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:52.614745 kubelet[2862]: E0905 00:00:52.614715 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:52.615831 kubelet[2862]: I0905 00:00:52.615804 2862 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:00:52.615921 kubelet[2862]: I0905 00:00:52.615899 2862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:00:52.617245 kubelet[2862]: I0905 00:00:52.617218 2862 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:00:52.644670 kubelet[2862]: I0905 00:00:52.644644 2862 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:00:52.644969 kubelet[2862]: I0905 00:00:52.644809 2862 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:00:52.644969 kubelet[2862]: I0905 00:00:52.644832 2862 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:00:52.652088 kubelet[2862]: I0905 00:00:52.652059 2862 policy_none.go:49] "None policy: Start" Sep 5 00:00:52.652088 kubelet[2862]: I0905 00:00:52.652084 2862 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:00:52.652192 kubelet[2862]: I0905 00:00:52.652095 2862 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:00:52.653746 kubelet[2862]: I0905 00:00:52.653595 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:00:52.654658 kubelet[2862]: I0905 00:00:52.654638 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:00:52.656200 kubelet[2862]: I0905 00:00:52.654713 2862 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 00:00:52.656200 kubelet[2862]: I0905 00:00:52.654738 2862 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:00:52.656200 kubelet[2862]: I0905 00:00:52.654744 2862 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 00:00:52.656200 kubelet[2862]: E0905 00:00:52.654787 2862 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:00:52.657382 kubelet[2862]: W0905 00:00:52.657111 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:52.657382 kubelet[2862]: E0905 00:00:52.657153 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:52.664037 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:00:52.676328 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:00:52.680113 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:00:52.688019 kubelet[2862]: I0905 00:00:52.687983 2862 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:00:52.688266 kubelet[2862]: I0905 00:00:52.688238 2862 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:00:52.688300 kubelet[2862]: I0905 00:00:52.688260 2862 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:00:52.688725 kubelet[2862]: I0905 00:00:52.688535 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:00:52.689952 kubelet[2862]: E0905 00:00:52.689931 2862 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:00:52.690261 kubelet[2862]: E0905 00:00:52.690215 2862 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:52.766712 systemd[1]: Created slice kubepods-burstable-pod5d9c44cf9510b25e64bdc69171db9d30.slice - libcontainer container kubepods-burstable-pod5d9c44cf9510b25e64bdc69171db9d30.slice. Sep 5 00:00:52.775898 kubelet[2862]: E0905 00:00:52.775765 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.778692 systemd[1]: Created slice kubepods-burstable-pod4c1843fdc90b55bed1a16071762c0fa6.slice - libcontainer container kubepods-burstable-pod4c1843fdc90b55bed1a16071762c0fa6.slice. Sep 5 00:00:52.781612 kubelet[2862]: E0905 00:00:52.781580 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.784485 systemd[1]: Created slice kubepods-burstable-podbf37c260e0bff2519887b95409ab725f.slice - libcontainer container kubepods-burstable-podbf37c260e0bff2519887b95409ab725f.slice. 
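The slices created above follow the kubelet's systemd cgroup-driver naming: pods are nested under kubepods.slice by QoS class, and the pod UID is appended with any dashes converted to underscores (the UIDs in this log contain none, so they appear verbatim). A sketch of that mapping, assuming this simplified form of the naming rule:

# Illustrative only; the authoritative logic lives in the kubelet's cgroup manager.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    uid = pod_uid.replace("-", "_")          # systemd unit names cannot carry raw dashes here
    if qos_class == "guaranteed":
        return f"kubepods-pod{uid}.slice"    # guaranteed pods sit directly under kubepods.slice
    return f"kubepods-{qos_class}-pod{uid}.slice"

print(pod_slice_name("burstable", "5d9c44cf9510b25e64bdc69171db9d30"))
# kubepods-burstable-pod5d9c44cf9510b25e64bdc69171db9d30.slice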
Sep 5 00:00:52.786110 kubelet[2862]: E0905 00:00:52.786078 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.789683 kubelet[2862]: I0905 00:00:52.789655 2862 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.790121 kubelet[2862]: E0905 00:00:52.790091 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.815612 kubelet[2862]: E0905 00:00:52.815485 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-ff4909b759?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Sep 5 00:00:52.915092 kubelet[2862]: I0905 00:00:52.914946 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d9c44cf9510b25e64bdc69171db9d30-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-ff4909b759\" (UID: \"5d9c44cf9510b25e64bdc69171db9d30\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915092 kubelet[2862]: I0905 00:00:52.914999 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915092 kubelet[2862]: I0905 00:00:52.915032 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915092 kubelet[2862]: I0905 00:00:52.915050 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915092 kubelet[2862]: I0905 00:00:52.915066 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d9c44cf9510b25e64bdc69171db9d30-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-ff4909b759\" (UID: \"5d9c44cf9510b25e64bdc69171db9d30\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915365 kubelet[2862]: I0905 00:00:52.915084 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915365 kubelet[2862]: I0905 00:00:52.915098 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915365 kubelet[2862]: I0905 00:00:52.915113 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c1843fdc90b55bed1a16071762c0fa6-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-ff4909b759\" (UID: \"4c1843fdc90b55bed1a16071762c0fa6\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.915365 kubelet[2862]: I0905 00:00:52.915128 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d9c44cf9510b25e64bdc69171db9d30-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-ff4909b759\" (UID: \"5d9c44cf9510b25e64bdc69171db9d30\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.991914 kubelet[2862]: I0905 00:00:52.991880 2862 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:52.992284 kubelet[2862]: E0905 00:00:52.992256 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:53.077978 containerd[1723]: time="2025-09-05T00:00:53.077642640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-ff4909b759,Uid:5d9c44cf9510b25e64bdc69171db9d30,Namespace:kube-system,Attempt:0,}" Sep 5 00:00:53.083384 containerd[1723]: time="2025-09-05T00:00:53.083342687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-ff4909b759,Uid:4c1843fdc90b55bed1a16071762c0fa6,Namespace:kube-system,Attempt:0,}" Sep 5 00:00:53.087651 containerd[1723]: time="2025-09-05T00:00:53.087337492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-ff4909b759,Uid:bf37c260e0bff2519887b95409ab725f,Namespace:kube-system,Attempt:0,}" Sep 5 00:00:53.216718 kubelet[2862]: E0905 00:00:53.216678 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-ff4909b759?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Sep 5 00:00:53.394078 kubelet[2862]: I0905 00:00:53.393935 2862 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:53.394339 kubelet[2862]: E0905 00:00:53.394307 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:53.436750 kubelet[2862]: W0905 00:00:53.436675 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-ff4909b759&limit=500&resourceVersion=0": 
dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:53.436750 kubelet[2862]: E0905 00:00:53.436755 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-ff4909b759&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:53.586952 kubelet[2862]: W0905 00:00:53.586913 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:53.587330 kubelet[2862]: E0905 00:00:53.586959 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:53.851495 kubelet[2862]: W0905 00:00:53.851427 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:53.851495 kubelet[2862]: E0905 00:00:53.851498 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:53.868133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097595651.mount: Deactivated successfully. 
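The var-lib-containerd-tmpmounts-containerd\x2dmount... units above are ordinary systemd path escaping: "/" separators become "-", and a literal "-" inside a path component is encoded as \x2d, which is what systemd-escape --path produces. A minimal sketch covering just the characters seen in these unit names:

# Simplified systemd path escaping; real systemd also handles leading dots and empty paths.
def escape_path(path: str) -> str:
    parts = [p for p in path.strip("/").split("/") if p]
    def esc(component: str) -> str:
        return "".join(c if c.isalnum() or c in "_." else f"\\x{ord(c):02x}" for c in component)
    return "-".join(esc(p) for p in parts)

print(escape_path("/var/lib/containerd/tmpmounts/containerd-mount3097595651") + ".mount")
# var-lib-containerd-tmpmounts-containerd\x2dmount3097595651.mount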
Sep 5 00:00:53.955260 containerd[1723]: time="2025-09-05T00:00:53.955191525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:00:54.017316 kubelet[2862]: E0905 00:00:54.017262 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-ff4909b759?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Sep 5 00:00:54.071926 kubelet[2862]: W0905 00:00:54.071860 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:54.071926 kubelet[2862]: E0905 00:00:54.071917 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:54.197005 kubelet[2862]: I0905 00:00:54.196661 2862 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:54.197235 kubelet[2862]: E0905 00:00:54.197208 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:54.308030 containerd[1723]: time="2025-09-05T00:00:54.307946818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 5 00:00:54.312265 containerd[1723]: time="2025-09-05T00:00:54.312226383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:00:54.322950 containerd[1723]: time="2025-09-05T00:00:54.322861837Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:00:54.327151 containerd[1723]: time="2025-09-05T00:00:54.327077762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:00:54.331043 containerd[1723]: time="2025-09-05T00:00:54.330710887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:00:54.333864 containerd[1723]: time="2025-09-05T00:00:54.333810851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:00:54.340011 containerd[1723]: time="2025-09-05T00:00:54.338913297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:00:54.340387 containerd[1723]: time="2025-09-05T00:00:54.340353299Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.252870606s" Sep 5 00:00:54.342963 containerd[1723]: time="2025-09-05T00:00:54.342920903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.265194503s" Sep 5 00:00:54.349151 containerd[1723]: time="2025-09-05T00:00:54.348396710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.264972423s" Sep 5 00:00:54.753670 kubelet[2862]: E0905 00:00:54.753617 2862 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:55.049747 kubelet[2862]: W0905 00:00:55.049703 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-ff4909b759&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 5 00:00:55.049747 kubelet[2862]: E0905 00:00:55.049753 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-ff4909b759&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:00:55.463327 containerd[1723]: time="2025-09-05T00:00:55.460451256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:00:55.463327 containerd[1723]: time="2025-09-05T00:00:55.462786099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:00:55.463327 containerd[1723]: time="2025-09-05T00:00:55.462802899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:00:55.463327 containerd[1723]: time="2025-09-05T00:00:55.462893419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:00:55.471454 containerd[1723]: time="2025-09-05T00:00:55.468773106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:00:55.471454 containerd[1723]: time="2025-09-05T00:00:55.471153949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:00:55.471454 containerd[1723]: time="2025-09-05T00:00:55.471178269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:00:55.473305 containerd[1723]: time="2025-09-05T00:00:55.472211031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:00:55.473673 containerd[1723]: time="2025-09-05T00:00:55.473315232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:00:55.473673 containerd[1723]: time="2025-09-05T00:00:55.473344152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:00:55.473673 containerd[1723]: time="2025-09-05T00:00:55.473420192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:00:55.473890 containerd[1723]: time="2025-09-05T00:00:55.473801433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:00:55.520196 systemd[1]: Started cri-containerd-3b7c9aed77e0da776bc1152606bf2851ae15ba5b33f94a1a54518b4ff8428fc9.scope - libcontainer container 3b7c9aed77e0da776bc1152606bf2851ae15ba5b33f94a1a54518b4ff8428fc9. Sep 5 00:00:55.521274 systemd[1]: Started cri-containerd-c30a6278cdbde02be5cc3e373e7e7ebae75aad9198a1b312821fb0627c058686.scope - libcontainer container c30a6278cdbde02be5cc3e373e7e7ebae75aad9198a1b312821fb0627c058686. Sep 5 00:00:55.527647 systemd[1]: Started cri-containerd-7b63f7dd02e6925dabb0b36eb99a6cae95e47344b0503d743999d0f55c8affa6.scope - libcontainer container 7b63f7dd02e6925dabb0b36eb99a6cae95e47344b0503d743999d0f55c8affa6. 
Sep 5 00:00:55.571964 containerd[1723]: time="2025-09-05T00:00:55.571837959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-ff4909b759,Uid:5d9c44cf9510b25e64bdc69171db9d30,Namespace:kube-system,Attempt:0,} returns sandbox id \"c30a6278cdbde02be5cc3e373e7e7ebae75aad9198a1b312821fb0627c058686\"" Sep 5 00:00:55.580135 containerd[1723]: time="2025-09-05T00:00:55.579969409Z" level=info msg="CreateContainer within sandbox \"c30a6278cdbde02be5cc3e373e7e7ebae75aad9198a1b312821fb0627c058686\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:00:55.588634 containerd[1723]: time="2025-09-05T00:00:55.588590100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-ff4909b759,Uid:bf37c260e0bff2519887b95409ab725f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b63f7dd02e6925dabb0b36eb99a6cae95e47344b0503d743999d0f55c8affa6\"" Sep 5 00:00:55.591870 containerd[1723]: time="2025-09-05T00:00:55.591837344Z" level=info msg="CreateContainer within sandbox \"7b63f7dd02e6925dabb0b36eb99a6cae95e47344b0503d743999d0f55c8affa6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:00:55.597922 containerd[1723]: time="2025-09-05T00:00:55.597863392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-ff4909b759,Uid:4c1843fdc90b55bed1a16071762c0fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b7c9aed77e0da776bc1152606bf2851ae15ba5b33f94a1a54518b4ff8428fc9\"" Sep 5 00:00:55.600883 containerd[1723]: time="2025-09-05T00:00:55.600847156Z" level=info msg="CreateContainer within sandbox \"3b7c9aed77e0da776bc1152606bf2851ae15ba5b33f94a1a54518b4ff8428fc9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:00:55.618459 kubelet[2862]: E0905 00:00:55.618412 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-ff4909b759?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="3.2s" Sep 5 00:00:55.655722 containerd[1723]: time="2025-09-05T00:00:55.655256826Z" level=info msg="CreateContainer within sandbox \"c30a6278cdbde02be5cc3e373e7e7ebae75aad9198a1b312821fb0627c058686\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77c9839f43117baef9acec0a15fc58b4c0a84fe8e72b8b110237632bbb2b65aa\"" Sep 5 00:00:55.656196 containerd[1723]: time="2025-09-05T00:00:55.655954986Z" level=info msg="StartContainer for \"77c9839f43117baef9acec0a15fc58b4c0a84fe8e72b8b110237632bbb2b65aa\"" Sep 5 00:00:55.688216 containerd[1723]: time="2025-09-05T00:00:55.688166508Z" level=info msg="CreateContainer within sandbox \"3b7c9aed77e0da776bc1152606bf2851ae15ba5b33f94a1a54518b4ff8428fc9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a2fcb6dc51772bfa07db4b523b0eb60793060dfc9a3fff32e69733e21d07c44c\"" Sep 5 00:00:55.689829 containerd[1723]: time="2025-09-05T00:00:55.688663748Z" level=info msg="StartContainer for \"a2fcb6dc51772bfa07db4b523b0eb60793060dfc9a3fff32e69733e21d07c44c\"" Sep 5 00:00:55.689379 systemd[1]: Started cri-containerd-77c9839f43117baef9acec0a15fc58b4c0a84fe8e72b8b110237632bbb2b65aa.scope - libcontainer container 77c9839f43117baef9acec0a15fc58b4c0a84fe8e72b8b110237632bbb2b65aa. 
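Each static control-plane pod gets a sandbox before its container is created, and containerd logs the sandbox id it returns. A small, illustrative helper for pairing pod names with those ids from journal text; the regex is tuned only to the message format visible in this log, and the sample string is abbreviated from the records above:

import re

# Match 'RunPodSandbox for &PodSandboxMetadata{Name:...} returns sandbox id \"...\"' records.
SANDBOX_RE = re.compile(r'PodSandboxMetadata\{Name:([^,]+),.*?returns sandbox id \\"([0-9a-f]{64})')

def sandboxes(journal_text: str) -> dict:
    return {m.group(1): m.group(2) for m in SANDBOX_RE.finditer(journal_text)}

sample = ('msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-ff4909b759,'
          'Uid:5d9c44cf9510b25e64bdc69171db9d30,Namespace:kube-system,Attempt:0,} returns sandbox id '
          '\\"c30a6278cdbde02be5cc3e373e7e7ebae75aad9198a1b312821fb0627c058686\\""')
print(sandboxes(sample))
# {'kube-apiserver-ci-4230.2.2-n-ff4909b759': 'c30a6278cdbde02be5cc3e373e7e7ebae75aad9198a1b312821fb0627c058686'}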
Sep 5 00:00:55.707464 containerd[1723]: time="2025-09-05T00:00:55.707400812Z" level=info msg="CreateContainer within sandbox \"7b63f7dd02e6925dabb0b36eb99a6cae95e47344b0503d743999d0f55c8affa6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be4b0c7cc831372b94f32c4e0dfdc6461dda6fd1b8d82f2433b0d9ad78ac0184\"" Sep 5 00:00:55.711075 containerd[1723]: time="2025-09-05T00:00:55.708745854Z" level=info msg="StartContainer for \"be4b0c7cc831372b94f32c4e0dfdc6461dda6fd1b8d82f2433b0d9ad78ac0184\"" Sep 5 00:00:55.723732 systemd[1]: Started cri-containerd-a2fcb6dc51772bfa07db4b523b0eb60793060dfc9a3fff32e69733e21d07c44c.scope - libcontainer container a2fcb6dc51772bfa07db4b523b0eb60793060dfc9a3fff32e69733e21d07c44c. Sep 5 00:00:55.756280 containerd[1723]: time="2025-09-05T00:00:55.756104715Z" level=info msg="StartContainer for \"77c9839f43117baef9acec0a15fc58b4c0a84fe8e72b8b110237632bbb2b65aa\" returns successfully" Sep 5 00:00:55.759237 systemd[1]: Started cri-containerd-be4b0c7cc831372b94f32c4e0dfdc6461dda6fd1b8d82f2433b0d9ad78ac0184.scope - libcontainer container be4b0c7cc831372b94f32c4e0dfdc6461dda6fd1b8d82f2433b0d9ad78ac0184. Sep 5 00:00:55.803056 kubelet[2862]: I0905 00:00:55.801522 2862 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:55.803056 kubelet[2862]: E0905 00:00:55.801916 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:55.820129 containerd[1723]: time="2025-09-05T00:00:55.820074437Z" level=info msg="StartContainer for \"be4b0c7cc831372b94f32c4e0dfdc6461dda6fd1b8d82f2433b0d9ad78ac0184\" returns successfully" Sep 5 00:00:55.820276 containerd[1723]: time="2025-09-05T00:00:55.820179797Z" level=info msg="StartContainer for \"a2fcb6dc51772bfa07db4b523b0eb60793060dfc9a3fff32e69733e21d07c44c\" returns successfully" Sep 5 00:00:56.674542 kubelet[2862]: E0905 00:00:56.674516 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:56.677047 kubelet[2862]: E0905 00:00:56.677010 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:56.679899 kubelet[2862]: E0905 00:00:56.679080 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:57.684035 kubelet[2862]: E0905 00:00:57.681594 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:57.684035 kubelet[2862]: E0905 00:00:57.681704 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:57.684035 kubelet[2862]: E0905 00:00:57.681925 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:57.855155 kubelet[2862]: E0905 00:00:57.855117 2862 
csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-n-ff4909b759" not found Sep 5 00:00:58.221000 kubelet[2862]: E0905 00:00:58.220956 2862 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-n-ff4909b759" not found Sep 5 00:00:58.672961 kubelet[2862]: E0905 00:00:58.672926 2862 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-n-ff4909b759" not found Sep 5 00:00:58.685071 kubelet[2862]: E0905 00:00:58.683614 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:58.685071 kubelet[2862]: E0905 00:00:58.683757 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:58.840287 kubelet[2862]: E0905 00:00:58.840238 2862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:59.006048 kubelet[2862]: I0905 00:00:59.005567 2862 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:59.012927 kubelet[2862]: I0905 00:00:59.012885 2862 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:00:59.012927 kubelet[2862]: E0905 00:00:59.012924 2862 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-n-ff4909b759\": node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.023919 kubelet[2862]: E0905 00:00:59.023884 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.125588 kubelet[2862]: E0905 00:00:59.125539 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.225758 kubelet[2862]: E0905 00:00:59.225708 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.325976 kubelet[2862]: E0905 00:00:59.325939 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.426376 kubelet[2862]: E0905 00:00:59.426326 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.527085 kubelet[2862]: E0905 00:00:59.527036 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.627824 kubelet[2862]: E0905 00:00:59.627694 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.728253 kubelet[2862]: E0905 00:00:59.728205 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.828367 kubelet[2862]: E0905 00:00:59.828323 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:00:59.929170 kubelet[2862]: E0905 00:00:59.929057 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:01:00.029610 kubelet[2862]: E0905 00:01:00.029569 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:01:00.099471 systemd[1]: Reload requested from client PID 3143 ('systemctl') (unit session-7.scope)... Sep 5 00:01:00.099781 systemd[1]: Reloading... Sep 5 00:01:00.130061 kubelet[2862]: E0905 00:01:00.129751 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:01:00.130216 kubelet[2862]: E0905 00:01:00.130201 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:01:00.207790 kubelet[2862]: E0905 00:01:00.207419 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-ff4909b759\" not found" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:01:00.218058 zram_generator::config[3191]: No configuration found. Sep 5 00:01:00.230410 kubelet[2862]: E0905 00:01:00.230363 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:01:00.331371 kubelet[2862]: E0905 00:01:00.331302 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:01:00.331954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:01:00.432142 kubelet[2862]: E0905 00:01:00.432097 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:01:00.451971 systemd[1]: Reloading finished in 351 ms. Sep 5 00:01:00.473651 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:01:00.487609 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:01:00.487971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:01:00.488052 systemd[1]: kubelet.service: Consumed 1.879s CPU time, 127.1M memory peak. Sep 5 00:01:00.492718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:01:00.605688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:01:00.617434 (kubelet)[3254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:01:00.661170 kubelet[3254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:01:00.661170 kubelet[3254]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:01:00.661170 kubelet[3254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:01:00.661528 kubelet[3254]: I0905 00:01:00.661230 3254 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:01:00.666954 kubelet[3254]: I0905 00:01:00.666914 3254 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 00:01:00.666954 kubelet[3254]: I0905 00:01:00.666945 3254 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:01:00.667243 kubelet[3254]: I0905 00:01:00.667223 3254 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 00:01:00.668495 kubelet[3254]: I0905 00:01:00.668473 3254 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:01:00.721968 kubelet[3254]: I0905 00:01:00.721544 3254 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:01:00.727124 kubelet[3254]: E0905 00:01:00.725827 3254 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:01:00.727124 kubelet[3254]: I0905 00:01:00.725862 3254 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:01:00.732324 kubelet[3254]: I0905 00:01:00.732052 3254 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:01:00.732324 kubelet[3254]: I0905 00:01:00.732255 3254 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:01:00.733181 kubelet[3254]: I0905 00:01:00.732281 3254 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-ff4909b759","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
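The nodeConfig=... value in the record above is printed as JSON, so the hard eviction defaults can be pulled out and shown in readable form. An illustrative sketch using just the HardEvictionThresholds portion copied from that record (in practice you would slice the full nodeConfig={...} blob out of the journal line first):

import json

node_config = json.loads("""{
  "HardEvictionThresholds": [
    {"Signal": "nodefs.available",  "Value": {"Quantity": null, "Percentage": 0.1}},
    {"Signal": "nodefs.inodesFree", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "imagefs.available", "Value": {"Quantity": null, "Percentage": 0.15}},
    {"Signal": "imagefs.inodesFree","Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "memory.available",  "Value": {"Quantity": "100Mi", "Percentage": 0}}
  ]
}""")

# Print each threshold as either an absolute quantity or a percentage of capacity.
for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f"evict when {t['Signal']} < {limit}")
# evict when nodefs.available < 10% ... evict when memory.available < 100Mi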
Sep 5 00:01:00.733181 kubelet[3254]: I0905 00:01:00.733053 3254 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:01:00.733181 kubelet[3254]: I0905 00:01:00.733069 3254 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 00:01:00.733181 kubelet[3254]: I0905 00:01:00.733176 3254 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:01:00.733991 kubelet[3254]: I0905 00:01:00.733750 3254 kubelet.go:446] "Attempting to sync node with API server" Sep 5 00:01:00.733991 kubelet[3254]: I0905 00:01:00.733780 3254 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:01:00.733991 kubelet[3254]: I0905 00:01:00.733802 3254 kubelet.go:352] "Adding apiserver pod source" Sep 5 00:01:00.733991 kubelet[3254]: I0905 00:01:00.733812 3254 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:01:00.735207 kubelet[3254]: I0905 00:01:00.735181 3254 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 5 00:01:00.735680 kubelet[3254]: I0905 00:01:00.735650 3254 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:01:00.736756 kubelet[3254]: I0905 00:01:00.736085 3254 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:01:00.736756 kubelet[3254]: I0905 00:01:00.736120 3254 server.go:1287] "Started kubelet" Sep 5 00:01:00.742944 kubelet[3254]: I0905 00:01:00.742895 3254 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:01:00.743583 kubelet[3254]: I0905 00:01:00.743563 3254 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:01:00.743695 kubelet[3254]: I0905 00:01:00.743579 3254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:01:00.743789 kubelet[3254]: I0905 00:01:00.743756 3254 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:01:00.753294 kubelet[3254]: E0905 00:01:00.753259 3254 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:01:00.755078 kubelet[3254]: I0905 00:01:00.755042 3254 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:01:00.756701 kubelet[3254]: I0905 00:01:00.756679 3254 server.go:479] "Adding debug handlers to kubelet server" Sep 5 00:01:00.759084 kubelet[3254]: I0905 00:01:00.759066 3254 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:01:00.759257 kubelet[3254]: I0905 00:01:00.759244 3254 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:01:00.759801 kubelet[3254]: E0905 00:01:00.759364 3254 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-ff4909b759\" not found" Sep 5 00:01:00.759921 kubelet[3254]: I0905 00:01:00.759910 3254 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:01:00.766195 kubelet[3254]: I0905 00:01:00.766166 3254 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 5 00:01:00.767605 kubelet[3254]: I0905 00:01:00.767397 3254 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:01:00.767605 kubelet[3254]: I0905 00:01:00.767501 3254 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:01:00.767770 kubelet[3254]: I0905 00:01:00.767755 3254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:01:00.767955 kubelet[3254]: I0905 00:01:00.767855 3254 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 00:01:00.767955 kubelet[3254]: I0905 00:01:00.767892 3254 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 00:01:00.767955 kubelet[3254]: I0905 00:01:00.767902 3254 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 00:01:00.768146 kubelet[3254]: E0905 00:01:00.767939 3254 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:01:00.777760 kubelet[3254]: I0905 00:01:00.777523 3254 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:01:00.838915 kubelet[3254]: I0905 00:01:00.838873 3254 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:01:00.838915 kubelet[3254]: I0905 00:01:00.838901 3254 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:01:00.838915 kubelet[3254]: I0905 00:01:00.838921 3254 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:01:01.007440 kubelet[3254]: E0905 00:01:00.868218 3254 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:01:01.007440 kubelet[3254]: I0905 00:01:01.006229 3254 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:01:01.007440 kubelet[3254]: I0905 00:01:01.006251 3254 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:01:01.007440 kubelet[3254]: I0905 00:01:01.006288 3254 policy_none.go:49] "None policy: Start" Sep 5 00:01:01.007440 kubelet[3254]: I0905 00:01:01.006301 3254 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:01:01.007440 kubelet[3254]: I0905 00:01:01.006313 3254 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:01:01.007440 kubelet[3254]: I0905 00:01:01.006417 3254 state_mem.go:75] "Updated machine memory state" Sep 5 00:01:01.011875 kubelet[3254]: I0905 00:01:01.011828 3254 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:01:01.012503 kubelet[3254]: I0905 00:01:01.012327 3254 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:01:01.012503 kubelet[3254]: I0905 00:01:01.012349 3254 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:01:01.012693 kubelet[3254]: I0905 00:01:01.012666 3254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:01:01.014324 kubelet[3254]: E0905 00:01:01.014196 3254 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:01:01.069689 kubelet[3254]: I0905 00:01:01.069634 3254 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.070169 kubelet[3254]: I0905 00:01:01.070041 3254 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.072218 kubelet[3254]: I0905 00:01:01.072164 3254 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.083931 kubelet[3254]: W0905 00:01:01.083891 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:01:01.089094 kubelet[3254]: W0905 00:01:01.088718 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:01:01.089094 kubelet[3254]: W0905 00:01:01.088937 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:01:01.122839 kubelet[3254]: I0905 00:01:01.122651 3254 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.140560 kubelet[3254]: I0905 00:01:01.140516 3254 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.140721 kubelet[3254]: I0905 00:01:01.140610 3254 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.161821 kubelet[3254]: I0905 00:01:01.161777 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.161821 kubelet[3254]: I0905 00:01:01.161822 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.161995 kubelet[3254]: I0905 00:01:01.161843 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.161995 kubelet[3254]: I0905 00:01:01.161858 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.161995 kubelet[3254]: I0905 00:01:01.161877 3254 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf37c260e0bff2519887b95409ab725f-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-ff4909b759\" (UID: \"bf37c260e0bff2519887b95409ab725f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.161995 kubelet[3254]: I0905 00:01:01.161893 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c1843fdc90b55bed1a16071762c0fa6-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-ff4909b759\" (UID: \"4c1843fdc90b55bed1a16071762c0fa6\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.161995 kubelet[3254]: I0905 00:01:01.161909 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d9c44cf9510b25e64bdc69171db9d30-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-ff4909b759\" (UID: \"5d9c44cf9510b25e64bdc69171db9d30\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.162202 kubelet[3254]: I0905 00:01:01.161924 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d9c44cf9510b25e64bdc69171db9d30-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-ff4909b759\" (UID: \"5d9c44cf9510b25e64bdc69171db9d30\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.162202 kubelet[3254]: I0905 00:01:01.161940 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d9c44cf9510b25e64bdc69171db9d30-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-ff4909b759\" (UID: \"5d9c44cf9510b25e64bdc69171db9d30\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.735038 kubelet[3254]: I0905 00:01:01.734983 3254 apiserver.go:52] "Watching apiserver" Sep 5 00:01:01.760044 kubelet[3254]: I0905 00:01:01.759992 3254 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:01:01.825694 kubelet[3254]: I0905 00:01:01.825361 3254 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.840256 kubelet[3254]: W0905 00:01:01.840220 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 00:01:01.840401 kubelet[3254]: E0905 00:01:01.840287 3254 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-ff4909b759\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" Sep 5 00:01:01.855946 kubelet[3254]: I0905 00:01:01.855772 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-ff4909b759" podStartSLOduration=0.855757582 podStartE2EDuration="855.757582ms" podCreationTimestamp="2025-09-05 00:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:01:01.855505902 +0000 UTC m=+1.233922288" watchObservedRunningTime="2025-09-05 00:01:01.855757582 +0000 UTC m=+1.234173928" Sep 5 00:01:01.886206 
kubelet[3254]: I0905 00:01:01.885923 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-ff4909b759" podStartSLOduration=0.885881507 podStartE2EDuration="885.881507ms" podCreationTimestamp="2025-09-05 00:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:01:01.873290608 +0000 UTC m=+1.251706994" watchObservedRunningTime="2025-09-05 00:01:01.885881507 +0000 UTC m=+1.264297893" Sep 5 00:01:01.886687 kubelet[3254]: I0905 00:01:01.886146 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-ff4909b759" podStartSLOduration=0.886139547 podStartE2EDuration="886.139547ms" podCreationTimestamp="2025-09-05 00:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:01:01.885802786 +0000 UTC m=+1.264219172" watchObservedRunningTime="2025-09-05 00:01:01.886139547 +0000 UTC m=+1.264555933" Sep 5 00:01:02.170699 sudo[2228]: pam_unix(sudo:session): session closed for user root Sep 5 00:01:02.271088 sshd[2227]: Connection closed by 10.200.16.10 port 60112 Sep 5 00:01:02.270973 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Sep 5 00:01:02.275373 systemd-logind[1703]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:01:02.275659 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:60112.service: Deactivated successfully. Sep 5 00:01:02.277392 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:01:02.277578 systemd[1]: session-7.scope: Consumed 6.117s CPU time, 221.3M memory peak. Sep 5 00:01:02.279130 systemd-logind[1703]: Removed session 7. Sep 5 00:01:05.989970 kubelet[3254]: I0905 00:01:05.989925 3254 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:01:05.990805 containerd[1723]: time="2025-09-05T00:01:05.990767342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:01:05.991385 kubelet[3254]: I0905 00:01:05.990956 3254 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:01:06.644117 systemd[1]: Created slice kubepods-besteffort-poda54e7e0f_88b2_4e16_9a27_dd9ea4ddaacf.slice - libcontainer container kubepods-besteffort-poda54e7e0f_88b2_4e16_9a27_dd9ea4ddaacf.slice. 
Sep 5 00:01:06.646772 kubelet[3254]: W0905 00:01:06.646619 3254 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230.2.2-n-ff4909b759" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-ff4909b759' and this object Sep 5 00:01:06.646772 kubelet[3254]: E0905 00:01:06.646666 3254 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4230.2.2-n-ff4909b759\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-ff4909b759' and this object" logger="UnhandledError" Sep 5 00:01:06.646772 kubelet[3254]: W0905 00:01:06.646703 3254 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.2.2-n-ff4909b759" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-ff4909b759' and this object Sep 5 00:01:06.646772 kubelet[3254]: E0905 00:01:06.646712 3254 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4230.2.2-n-ff4909b759\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-ff4909b759' and this object" logger="UnhandledError" Sep 5 00:01:06.647892 kubelet[3254]: I0905 00:01:06.646780 3254 status_manager.go:890] "Failed to get status for pod" podUID="a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf" pod="kube-system/kube-proxy-vmmt4" err="pods \"kube-proxy-vmmt4\" is forbidden: User \"system:node:ci-4230.2.2-n-ff4909b759\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-ff4909b759' and this object" Sep 5 00:01:06.660246 systemd[1]: Created slice kubepods-burstable-pod03244622_6e18_4de7_bcf1_8731d09294b1.slice - libcontainer container kubepods-burstable-pod03244622_6e18_4de7_bcf1_8731d09294b1.slice. 
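The two "Created slice" entries above embed each pod's QoS class and UID in the slice name, with the dashes of the UID turned into underscores (the UID a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf belongs to kube-proxy-vmmt4, as the surrounding entries show). A small sketch of that naming pattern, assuming the systemd cgroup driver convention seen here; only the besteffort/burstable pattern visible in this log is reproduced.

```go
// Sketch of the slice naming visible in the log, assuming the systemd
// cgroup driver convention: kubepods-<qos>-pod<uid with '-' -> '_'>.slice.
// Not the kubelet's own implementation, just a reconstruction of the pattern.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		strings.ToLower(qosClass), strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of kube-proxy-vmmt4 as it appears in the surrounding entries.
	fmt.Println(podSliceName("besteffort", "a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf"))
	// Output: kubepods-besteffort-poda54e7e0f_88b2_4e16_9a27_dd9ea4ddaacf.slice
}
```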
Sep 5 00:01:06.694719 kubelet[3254]: I0905 00:01:06.694439 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/03244622-6e18-4de7-bcf1-8731d09294b1-run\") pod \"kube-flannel-ds-hvr7x\" (UID: \"03244622-6e18-4de7-bcf1-8731d09294b1\") " pod="kube-flannel/kube-flannel-ds-hvr7x" Sep 5 00:01:06.694719 kubelet[3254]: I0905 00:01:06.694484 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf-kube-proxy\") pod \"kube-proxy-vmmt4\" (UID: \"a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf\") " pod="kube-system/kube-proxy-vmmt4" Sep 5 00:01:06.694719 kubelet[3254]: I0905 00:01:06.694503 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf-lib-modules\") pod \"kube-proxy-vmmt4\" (UID: \"a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf\") " pod="kube-system/kube-proxy-vmmt4" Sep 5 00:01:06.694719 kubelet[3254]: I0905 00:01:06.694521 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvsp\" (UniqueName: \"kubernetes.io/projected/03244622-6e18-4de7-bcf1-8731d09294b1-kube-api-access-5pvsp\") pod \"kube-flannel-ds-hvr7x\" (UID: \"03244622-6e18-4de7-bcf1-8731d09294b1\") " pod="kube-flannel/kube-flannel-ds-hvr7x" Sep 5 00:01:06.694719 kubelet[3254]: I0905 00:01:06.694543 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/03244622-6e18-4de7-bcf1-8731d09294b1-cni-plugin\") pod \"kube-flannel-ds-hvr7x\" (UID: \"03244622-6e18-4de7-bcf1-8731d09294b1\") " pod="kube-flannel/kube-flannel-ds-hvr7x" Sep 5 00:01:06.694942 kubelet[3254]: I0905 00:01:06.694559 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03244622-6e18-4de7-bcf1-8731d09294b1-xtables-lock\") pod \"kube-flannel-ds-hvr7x\" (UID: \"03244622-6e18-4de7-bcf1-8731d09294b1\") " pod="kube-flannel/kube-flannel-ds-hvr7x" Sep 5 00:01:06.694942 kubelet[3254]: I0905 00:01:06.694576 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/03244622-6e18-4de7-bcf1-8731d09294b1-flannel-cfg\") pod \"kube-flannel-ds-hvr7x\" (UID: \"03244622-6e18-4de7-bcf1-8731d09294b1\") " pod="kube-flannel/kube-flannel-ds-hvr7x" Sep 5 00:01:06.694942 kubelet[3254]: I0905 00:01:06.694592 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf-xtables-lock\") pod \"kube-proxy-vmmt4\" (UID: \"a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf\") " pod="kube-system/kube-proxy-vmmt4" Sep 5 00:01:06.694942 kubelet[3254]: I0905 00:01:06.694607 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b76nn\" (UniqueName: \"kubernetes.io/projected/a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf-kube-api-access-b76nn\") pod \"kube-proxy-vmmt4\" (UID: \"a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf\") " pod="kube-system/kube-proxy-vmmt4" Sep 5 00:01:06.694942 kubelet[3254]: I0905 00:01:06.694623 3254 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/03244622-6e18-4de7-bcf1-8731d09294b1-cni\") pod \"kube-flannel-ds-hvr7x\" (UID: \"03244622-6e18-4de7-bcf1-8731d09294b1\") " pod="kube-flannel/kube-flannel-ds-hvr7x" Sep 5 00:01:06.804794 kubelet[3254]: E0905 00:01:06.804745 3254 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 00:01:06.804794 kubelet[3254]: E0905 00:01:06.804784 3254 projected.go:194] Error preparing data for projected volume kube-api-access-5pvsp for pod kube-flannel/kube-flannel-ds-hvr7x: configmap "kube-root-ca.crt" not found Sep 5 00:01:06.805040 kubelet[3254]: E0905 00:01:06.804869 3254 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03244622-6e18-4de7-bcf1-8731d09294b1-kube-api-access-5pvsp podName:03244622-6e18-4de7-bcf1-8731d09294b1 nodeName:}" failed. No retries permitted until 2025-09-05 00:01:07.304838954 +0000 UTC m=+6.683255300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5pvsp" (UniqueName: "kubernetes.io/projected/03244622-6e18-4de7-bcf1-8731d09294b1-kube-api-access-5pvsp") pod "kube-flannel-ds-hvr7x" (UID: "03244622-6e18-4de7-bcf1-8731d09294b1") : configmap "kube-root-ca.crt" not found Sep 5 00:01:07.566205 containerd[1723]: time="2025-09-05T00:01:07.566157660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hvr7x,Uid:03244622-6e18-4de7-bcf1-8731d09294b1,Namespace:kube-flannel,Attempt:0,}" Sep 5 00:01:07.636359 containerd[1723]: time="2025-09-05T00:01:07.635905587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:01:07.636359 containerd[1723]: time="2025-09-05T00:01:07.635982867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:01:07.636359 containerd[1723]: time="2025-09-05T00:01:07.635998347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:07.636359 containerd[1723]: time="2025-09-05T00:01:07.636200387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:07.665245 systemd[1]: Started cri-containerd-47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf.scope - libcontainer container 47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf. 
Sep 5 00:01:07.694623 containerd[1723]: time="2025-09-05T00:01:07.694555779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hvr7x,Uid:03244622-6e18-4de7-bcf1-8731d09294b1,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf\"" Sep 5 00:01:07.699823 containerd[1723]: time="2025-09-05T00:01:07.699446466Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Sep 5 00:01:07.795575 kubelet[3254]: E0905 00:01:07.795540 3254 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Sep 5 00:01:07.796573 kubelet[3254]: E0905 00:01:07.796506 3254 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf-kube-proxy podName:a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf nodeName:}" failed. No retries permitted until 2025-09-05 00:01:08.296482106 +0000 UTC m=+7.674898492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf-kube-proxy") pod "kube-proxy-vmmt4" (UID: "a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf") : failed to sync configmap cache: timed out waiting for the condition Sep 5 00:01:08.401561 systemd[1]: run-containerd-runc-k8s.io-47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf-runc.UDVoft.mount: Deactivated successfully. Sep 5 00:01:08.452587 containerd[1723]: time="2025-09-05T00:01:08.452522281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmmt4,Uid:a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf,Namespace:kube-system,Attempt:0,}" Sep 5 00:01:08.503600 containerd[1723]: time="2025-09-05T00:01:08.503465305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:01:08.503600 containerd[1723]: time="2025-09-05T00:01:08.503545065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:01:08.503600 containerd[1723]: time="2025-09-05T00:01:08.503568025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:08.504261 containerd[1723]: time="2025-09-05T00:01:08.504191585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:08.528207 systemd[1]: Started cri-containerd-61c80aa8b936820f4345b7adefddeb2ab23f0c942ebb6b8b0b9ba7531d7c4ce4.scope - libcontainer container 61c80aa8b936820f4345b7adefddeb2ab23f0c942ebb6b8b0b9ba7531d7c4ce4. 
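Both the projected kube-api-access volume (earlier entries) and the kube-proxy ConfigMap above fail to mount and are parked with "No retries permitted until … (durationBeforeRetry 500ms)". Below is a simplified reconstruction of that parking behaviour, assuming only what the messages state; it is not the kubelet's actual nestedpendingoperations code.

```go
// Sketch of the "No retries permitted until <t>" behaviour implied by the
// nestedpendingoperations messages above: after a failed MountVolume.SetUp
// the operation is frozen for a backoff window (500ms here) before it may be
// retried. Simplified reconstruction, not the kubelet's implementation.
package main

import (
	"fmt"
	"time"
)

type pendingOp struct {
	lastError   error
	lastFailure time.Time
	backoff     time.Duration
}

func (o *pendingOp) fail(err error, now time.Time) {
	o.lastError = err
	o.lastFailure = now
	o.backoff = 500 * time.Millisecond // durationBeforeRetry seen in the log
}

func (o *pendingOp) mayRetry(now time.Time) bool {
	return now.After(o.lastFailure.Add(o.backoff))
}

func main() {
	var op pendingOp
	now := time.Now()
	op.fail(fmt.Errorf("failed to sync configmap cache: timed out waiting for the condition"), now)
	fmt.Println("retry immediately?", op.mayRetry(now))                           // false
	fmt.Println("retry after 600ms?", op.mayRetry(now.Add(600*time.Millisecond))) // true
}
```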
Sep 5 00:01:08.549818 containerd[1723]: time="2025-09-05T00:01:08.549775442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmmt4,Uid:a54e7e0f-88b2-4e16-9a27-dd9ea4ddaacf,Namespace:kube-system,Attempt:0,} returns sandbox id \"61c80aa8b936820f4345b7adefddeb2ab23f0c942ebb6b8b0b9ba7531d7c4ce4\"" Sep 5 00:01:08.554869 containerd[1723]: time="2025-09-05T00:01:08.554830128Z" level=info msg="CreateContainer within sandbox \"61c80aa8b936820f4345b7adefddeb2ab23f0c942ebb6b8b0b9ba7531d7c4ce4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:01:08.597430 containerd[1723]: time="2025-09-05T00:01:08.597381221Z" level=info msg="CreateContainer within sandbox \"61c80aa8b936820f4345b7adefddeb2ab23f0c942ebb6b8b0b9ba7531d7c4ce4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9eca29e4c50d0f180f40df1f8153f1bf8fc0beda5d278f5a900e3821142cab24\"" Sep 5 00:01:08.598794 containerd[1723]: time="2025-09-05T00:01:08.598712703Z" level=info msg="StartContainer for \"9eca29e4c50d0f180f40df1f8153f1bf8fc0beda5d278f5a900e3821142cab24\"" Sep 5 00:01:08.630192 systemd[1]: Started cri-containerd-9eca29e4c50d0f180f40df1f8153f1bf8fc0beda5d278f5a900e3821142cab24.scope - libcontainer container 9eca29e4c50d0f180f40df1f8153f1bf8fc0beda5d278f5a900e3821142cab24. Sep 5 00:01:08.671660 containerd[1723]: time="2025-09-05T00:01:08.670586232Z" level=info msg="StartContainer for \"9eca29e4c50d0f180f40df1f8153f1bf8fc0beda5d278f5a900e3821142cab24\" returns successfully" Sep 5 00:01:09.402511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078713430.mount: Deactivated successfully. Sep 5 00:01:09.759966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724289400.mount: Deactivated successfully. Sep 5 00:01:09.858364 containerd[1723]: time="2025-09-05T00:01:09.858292188Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:01:09.861984 containerd[1723]: time="2025-09-05T00:01:09.861936673Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Sep 5 00:01:09.866187 containerd[1723]: time="2025-09-05T00:01:09.866129238Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:01:09.871040 containerd[1723]: time="2025-09-05T00:01:09.870976404Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:01:09.871812 containerd[1723]: time="2025-09-05T00:01:09.871663925Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.171385338s" Sep 5 00:01:09.871812 containerd[1723]: time="2025-09-05T00:01:09.871701525Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Sep 5 00:01:09.875527 containerd[1723]: time="2025-09-05T00:01:09.875491569Z" level=info 
msg="CreateContainer within sandbox \"47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Sep 5 00:01:09.922145 containerd[1723]: time="2025-09-05T00:01:09.922045987Z" level=info msg="CreateContainer within sandbox \"47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a\"" Sep 5 00:01:09.924050 containerd[1723]: time="2025-09-05T00:01:09.922568908Z" level=info msg="StartContainer for \"285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a\"" Sep 5 00:01:09.948173 systemd[1]: Started cri-containerd-285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a.scope - libcontainer container 285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a. Sep 5 00:01:09.970219 systemd[1]: cri-containerd-285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a.scope: Deactivated successfully. Sep 5 00:01:09.975647 containerd[1723]: time="2025-09-05T00:01:09.975610094Z" level=info msg="StartContainer for \"285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a\" returns successfully" Sep 5 00:01:10.027942 containerd[1723]: time="2025-09-05T00:01:10.027889919Z" level=info msg="shim disconnected" id=285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a namespace=k8s.io Sep 5 00:01:10.028455 containerd[1723]: time="2025-09-05T00:01:10.028251599Z" level=warning msg="cleaning up after shim disconnected" id=285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a namespace=k8s.io Sep 5 00:01:10.028455 containerd[1723]: time="2025-09-05T00:01:10.028287199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:01:10.401803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-285b019fde0e47f13613d92fb4233814120927eaa8d80d8850358a8b09523d5a-rootfs.mount: Deactivated successfully. Sep 5 00:01:10.841194 containerd[1723]: time="2025-09-05T00:01:10.841152169Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Sep 5 00:01:10.858239 kubelet[3254]: I0905 00:01:10.858166 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vmmt4" podStartSLOduration=4.85814791 podStartE2EDuration="4.85814791s" podCreationTimestamp="2025-09-05 00:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:01:08.849842375 +0000 UTC m=+8.228258761" watchObservedRunningTime="2025-09-05 00:01:10.85814791 +0000 UTC m=+10.236564296" Sep 5 00:01:13.894436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046863234.mount: Deactivated successfully. 
Sep 5 00:01:14.955533 containerd[1723]: time="2025-09-05T00:01:14.954405953Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:01:14.958145 containerd[1723]: time="2025-09-05T00:01:14.958070797Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Sep 5 00:01:14.962806 containerd[1723]: time="2025-09-05T00:01:14.962731363Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:01:14.968857 containerd[1723]: time="2025-09-05T00:01:14.968808850Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:01:14.970048 containerd[1723]: time="2025-09-05T00:01:14.969598451Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 4.128406362s" Sep 5 00:01:14.970048 containerd[1723]: time="2025-09-05T00:01:14.969631611Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Sep 5 00:01:14.973228 containerd[1723]: time="2025-09-05T00:01:14.973180296Z" level=info msg="CreateContainer within sandbox \"47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:01:14.997801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310495275.mount: Deactivated successfully. Sep 5 00:01:15.015820 containerd[1723]: time="2025-09-05T00:01:15.015718107Z" level=info msg="CreateContainer within sandbox \"47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a\"" Sep 5 00:01:15.016677 containerd[1723]: time="2025-09-05T00:01:15.016533468Z" level=info msg="StartContainer for \"2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a\"" Sep 5 00:01:15.045294 systemd[1]: Started cri-containerd-2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a.scope - libcontainer container 2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a. Sep 5 00:01:15.068999 systemd[1]: cri-containerd-2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a.scope: Deactivated successfully. Sep 5 00:01:15.075895 containerd[1723]: time="2025-09-05T00:01:15.075667980Z" level=info msg="StartContainer for \"2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a\" returns successfully" Sep 5 00:01:15.093357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a-rootfs.mount: Deactivated successfully. 
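The two "Pulled image" messages above report image sizes and pull durations (3,662,650 bytes in ~2.17s for flannel-cni-plugin v1.1.2, 26,863,435 bytes in ~4.13s for flannel v0.22.0). A quick back-of-the-envelope throughput calculation using exactly those logged numbers:

```go
// Back-of-the-envelope pull throughput from the two "Pulled image" messages
// above, using the image sizes and durations exactly as logged.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"docker.io/flannel/flannel-cni-plugin:v1.1.2", 3662650, 2.171385338},
		{"docker.io/flannel/flannel:v0.22.0", 26863435, 4.128406362},
	}
	for _, p := range pulls {
		mibps := p.bytes / p.seconds / (1024 * 1024)
		fmt.Printf("%-45s %.1f MiB/s\n", p.image, mibps)
	}
}
```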
Sep 5 00:01:15.150504 kubelet[3254]: I0905 00:01:15.149707 3254 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:01:15.195576 systemd[1]: Created slice kubepods-burstable-pod1630225e_db47_4f64_9088_f900e6aeafd9.slice - libcontainer container kubepods-burstable-pod1630225e_db47_4f64_9088_f900e6aeafd9.slice. Sep 5 00:01:15.202959 systemd[1]: Created slice kubepods-burstable-pod3c362ded_12f7_49b2_a247_dac6ed389f54.slice - libcontainer container kubepods-burstable-pod3c362ded_12f7_49b2_a247_dac6ed389f54.slice. Sep 5 00:01:15.248417 kubelet[3254]: I0905 00:01:15.248154 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf5vd\" (UniqueName: \"kubernetes.io/projected/1630225e-db47-4f64-9088-f900e6aeafd9-kube-api-access-jf5vd\") pod \"coredns-668d6bf9bc-wl6kv\" (UID: \"1630225e-db47-4f64-9088-f900e6aeafd9\") " pod="kube-system/coredns-668d6bf9bc-wl6kv" Sep 5 00:01:15.248417 kubelet[3254]: I0905 00:01:15.248216 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65xj\" (UniqueName: \"kubernetes.io/projected/3c362ded-12f7-49b2-a247-dac6ed389f54-kube-api-access-q65xj\") pod \"coredns-668d6bf9bc-d9ph8\" (UID: \"3c362ded-12f7-49b2-a247-dac6ed389f54\") " pod="kube-system/coredns-668d6bf9bc-d9ph8" Sep 5 00:01:15.248417 kubelet[3254]: I0905 00:01:15.248235 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1630225e-db47-4f64-9088-f900e6aeafd9-config-volume\") pod \"coredns-668d6bf9bc-wl6kv\" (UID: \"1630225e-db47-4f64-9088-f900e6aeafd9\") " pod="kube-system/coredns-668d6bf9bc-wl6kv" Sep 5 00:01:15.248417 kubelet[3254]: I0905 00:01:15.248254 3254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c362ded-12f7-49b2-a247-dac6ed389f54-config-volume\") pod \"coredns-668d6bf9bc-d9ph8\" (UID: \"3c362ded-12f7-49b2-a247-dac6ed389f54\") " pod="kube-system/coredns-668d6bf9bc-d9ph8" Sep 5 00:01:15.500183 containerd[1723]: time="2025-09-05T00:01:15.500064614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl6kv,Uid:1630225e-db47-4f64-9088-f900e6aeafd9,Namespace:kube-system,Attempt:0,}" Sep 5 00:01:15.506854 containerd[1723]: time="2025-09-05T00:01:15.506542702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9ph8,Uid:3c362ded-12f7-49b2-a247-dac6ed389f54,Namespace:kube-system,Attempt:0,}" Sep 5 00:01:15.572369 containerd[1723]: time="2025-09-05T00:01:15.572210141Z" level=info msg="shim disconnected" id=2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a namespace=k8s.io Sep 5 00:01:15.572542 containerd[1723]: time="2025-09-05T00:01:15.572362901Z" level=warning msg="cleaning up after shim disconnected" id=2bc5256bdddb5555654484181ab6c7a48b47210dac1629d18502a46a6c94127a namespace=k8s.io Sep 5 00:01:15.572542 containerd[1723]: time="2025-09-05T00:01:15.572430622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:01:15.658544 containerd[1723]: time="2025-09-05T00:01:15.658376566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl6kv,Uid:1630225e-db47-4f64-9088-f900e6aeafd9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"950a47a3ecb915c7df3234115d0d929907f69e90f9560f04fc279fd5c89a8209\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 5 00:01:15.658677 kubelet[3254]: E0905 00:01:15.658632 3254 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950a47a3ecb915c7df3234115d0d929907f69e90f9560f04fc279fd5c89a8209\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 5 00:01:15.658718 kubelet[3254]: E0905 00:01:15.658697 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950a47a3ecb915c7df3234115d0d929907f69e90f9560f04fc279fd5c89a8209\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-wl6kv" Sep 5 00:01:15.658742 kubelet[3254]: E0905 00:01:15.658717 3254 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950a47a3ecb915c7df3234115d0d929907f69e90f9560f04fc279fd5c89a8209\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-wl6kv" Sep 5 00:01:15.658803 kubelet[3254]: E0905 00:01:15.658767 3254 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wl6kv_kube-system(1630225e-db47-4f64-9088-f900e6aeafd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wl6kv_kube-system(1630225e-db47-4f64-9088-f900e6aeafd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"950a47a3ecb915c7df3234115d0d929907f69e90f9560f04fc279fd5c89a8209\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-wl6kv" podUID="1630225e-db47-4f64-9088-f900e6aeafd9" Sep 5 00:01:15.665166 containerd[1723]: time="2025-09-05T00:01:15.665112734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9ph8,Uid:3c362ded-12f7-49b2-a247-dac6ed389f54,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac88e4cc3937096784309e7a275e9d7b9d7632caafcf841c1ba6b9c7dc5ec8db\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 5 00:01:15.665372 kubelet[3254]: E0905 00:01:15.665334 3254 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac88e4cc3937096784309e7a275e9d7b9d7632caafcf841c1ba6b9c7dc5ec8db\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 5 00:01:15.665440 kubelet[3254]: E0905 00:01:15.665398 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac88e4cc3937096784309e7a275e9d7b9d7632caafcf841c1ba6b9c7dc5ec8db\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-d9ph8" Sep 5 00:01:15.665440 kubelet[3254]: 
E0905 00:01:15.665417 3254 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac88e4cc3937096784309e7a275e9d7b9d7632caafcf841c1ba6b9c7dc5ec8db\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-d9ph8" Sep 5 00:01:15.665490 kubelet[3254]: E0905 00:01:15.665450 3254 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d9ph8_kube-system(3c362ded-12f7-49b2-a247-dac6ed389f54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d9ph8_kube-system(3c362ded-12f7-49b2-a247-dac6ed389f54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac88e4cc3937096784309e7a275e9d7b9d7632caafcf841c1ba6b9c7dc5ec8db\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-d9ph8" podUID="3c362ded-12f7-49b2-a247-dac6ed389f54" Sep 5 00:01:15.854506 containerd[1723]: time="2025-09-05T00:01:15.854040563Z" level=info msg="CreateContainer within sandbox \"47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Sep 5 00:01:15.899489 containerd[1723]: time="2025-09-05T00:01:15.899435578Z" level=info msg="CreateContainer within sandbox \"47615387edc1070754bc223b97a3f876a89e2ed62bd90f1887fc3e4212d9dfaf\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"078830e53764fa75c6446bdcf68e1b0f660d2b9c3bbe978e72196061b2895c75\"" Sep 5 00:01:15.901102 containerd[1723]: time="2025-09-05T00:01:15.900277219Z" level=info msg="StartContainer for \"078830e53764fa75c6446bdcf68e1b0f660d2b9c3bbe978e72196061b2895c75\"" Sep 5 00:01:15.924169 systemd[1]: Started cri-containerd-078830e53764fa75c6446bdcf68e1b0f660d2b9c3bbe978e72196061b2895c75.scope - libcontainer container 078830e53764fa75c6446bdcf68e1b0f660d2b9c3bbe978e72196061b2895c75. 
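The failed CoreDNS sandboxes above all trace back to a missing /run/flannel/subnet.env; the kube-flannel container that has just started is what eventually provides it. Below is a sketch of parsing such a KEY=VALUE file. Only the path and the 192.168.0.0/24 pod CIDR come from the log; the key names and example values are illustrative assumptions (the 1450 MTU and the /17 route do match the delegated CNI config printed further down).

```go
// Illustrative parser for a /run/flannel/subnet.env-style KEY=VALUE file.
// The path and the 192.168.0.0/24 pod CIDR come from the log; the key names
// and the example values below are assumptions made for this sketch.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseSubnetEnv(contents string) map[string]string {
	env := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env
}

func main() {
	example := `FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
`
	for k, v := range parseSubnetEnv(example) {
		fmt.Println(k, "=", v)
	}
}
```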
Sep 5 00:01:15.951273 containerd[1723]: time="2025-09-05T00:01:15.951225320Z" level=info msg="StartContainer for \"078830e53764fa75c6446bdcf68e1b0f660d2b9c3bbe978e72196061b2895c75\" returns successfully" Sep 5 00:01:16.873792 kubelet[3254]: I0905 00:01:16.873498 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-hvr7x" podStartSLOduration=3.600428369 podStartE2EDuration="10.873481478s" podCreationTimestamp="2025-09-05 00:01:06 +0000 UTC" firstStartedPulling="2025-09-05 00:01:07.697614943 +0000 UTC m=+7.076031329" lastFinishedPulling="2025-09-05 00:01:14.970668052 +0000 UTC m=+14.349084438" observedRunningTime="2025-09-05 00:01:16.870082394 +0000 UTC m=+16.248498780" watchObservedRunningTime="2025-09-05 00:01:16.873481478 +0000 UTC m=+16.251897864" Sep 5 00:01:17.249091 systemd-networkd[1457]: flannel.1: Link UP Sep 5 00:01:17.249098 systemd-networkd[1457]: flannel.1: Gained carrier Sep 5 00:01:18.563207 systemd-networkd[1457]: flannel.1: Gained IPv6LL Sep 5 00:01:27.769715 containerd[1723]: time="2025-09-05T00:01:27.769665858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9ph8,Uid:3c362ded-12f7-49b2-a247-dac6ed389f54,Namespace:kube-system,Attempt:0,}" Sep 5 00:01:27.810215 systemd-networkd[1457]: cni0: Link UP Sep 5 00:01:27.810228 systemd-networkd[1457]: cni0: Gained carrier Sep 5 00:01:27.812102 systemd-networkd[1457]: cni0: Lost carrier Sep 5 00:01:27.853143 systemd-networkd[1457]: vethb41f167f: Link UP Sep 5 00:01:27.862294 kernel: cni0: port 1(vethb41f167f) entered blocking state Sep 5 00:01:27.862418 kernel: cni0: port 1(vethb41f167f) entered disabled state Sep 5 00:01:27.867167 kernel: vethb41f167f: entered allmulticast mode Sep 5 00:01:27.872625 kernel: vethb41f167f: entered promiscuous mode Sep 5 00:01:27.878042 kernel: cni0: port 1(vethb41f167f) entered blocking state Sep 5 00:01:27.878147 kernel: cni0: port 1(vethb41f167f) entered forwarding state Sep 5 00:01:27.885642 kernel: cni0: port 1(vethb41f167f) entered disabled state Sep 5 00:01:27.899747 kernel: cni0: port 1(vethb41f167f) entered blocking state Sep 5 00:01:27.899857 kernel: cni0: port 1(vethb41f167f) entered forwarding state Sep 5 00:01:27.899702 systemd-networkd[1457]: vethb41f167f: Gained carrier Sep 5 00:01:27.900862 systemd-networkd[1457]: cni0: Gained carrier Sep 5 00:01:27.902947 containerd[1723]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Sep 5 00:01:27.902947 containerd[1723]: delegateAdd: netconf sent to delegate plugin: Sep 5 00:01:27.926675 containerd[1723]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-05T00:01:27.926166604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:01:27.926675 containerd[1723]: time="2025-09-05T00:01:27.926553084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:01:27.926675 containerd[1723]: time="2025-09-05T00:01:27.926593204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:27.927042 containerd[1723]: time="2025-09-05T00:01:27.926905205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:27.943958 systemd[1]: run-containerd-runc-k8s.io-71ef939c72050cfe9207490d1931a9c8949e22423c63b2a8c1e9a222b945b1c2-runc.uPlJ8V.mount: Deactivated successfully. Sep 5 00:01:27.951184 systemd[1]: Started cri-containerd-71ef939c72050cfe9207490d1931a9c8949e22423c63b2a8c1e9a222b945b1c2.scope - libcontainer container 71ef939c72050cfe9207490d1931a9c8949e22423c63b2a8c1e9a222b945b1c2. Sep 5 00:01:27.979744 containerd[1723]: time="2025-09-05T00:01:27.979693027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9ph8,Uid:3c362ded-12f7-49b2-a247-dac6ed389f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"71ef939c72050cfe9207490d1931a9c8949e22423c63b2a8c1e9a222b945b1c2\"" Sep 5 00:01:27.982786 containerd[1723]: time="2025-09-05T00:01:27.982744911Z" level=info msg="CreateContainer within sandbox \"71ef939c72050cfe9207490d1931a9c8949e22423c63b2a8c1e9a222b945b1c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:01:28.027977 containerd[1723]: time="2025-09-05T00:01:28.027795924Z" level=info msg="CreateContainer within sandbox \"71ef939c72050cfe9207490d1931a9c8949e22423c63b2a8c1e9a222b945b1c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a8630fb5feeedfc3949e7e2104b0edcfb6e8ef7e520c71ec6b79b353caf4a09\"" Sep 5 00:01:28.028526 containerd[1723]: time="2025-09-05T00:01:28.028486285Z" level=info msg="StartContainer for \"3a8630fb5feeedfc3949e7e2104b0edcfb6e8ef7e520c71ec6b79b353caf4a09\"" Sep 5 00:01:28.056260 systemd[1]: Started cri-containerd-3a8630fb5feeedfc3949e7e2104b0edcfb6e8ef7e520c71ec6b79b353caf4a09.scope - libcontainer container 3a8630fb5feeedfc3949e7e2104b0edcfb6e8ef7e520c71ec6b79b353caf4a09. 
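The delegateAdd entry above prints, verbatim, the netconf flannel hands to the bridge plugin (cbr0, host-local IPAM over 192.168.0.0/24, a 192.168.0.0/17 route, MTU 1450). The sketch below unmarshals that exact JSON into a minimal struct to make the delegated shape explicit; the struct is illustrative, not the CNI library's own type.

```go
// Unmarshal the delegated bridge netconf copied verbatim from the
// "delegateAdd: netconf sent to delegate plugin" log entry above.
package main

import (
	"encoding/json"
	"fmt"
)

type delegateConf struct {
	CNIVersion       string `json:"cniVersion"`
	Name             string `json:"name"`
	Type             string `json:"type"`
	MTU              int    `json:"mtu"`
	IsGateway        bool   `json:"isGateway"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	HairpinMode      bool   `json:"hairpinMode"`
	IPMasq           bool   `json:"ipMasq"`
	IPAM             struct {
		Type   string                `json:"type"`
		Ranges [][]map[string]string `json:"ranges"`
		Routes []map[string]string   `json:"routes"`
	} `json:"ipam"`
}

func main() {
	// Copied from the delegateAdd line in the log.
	raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`
	var c delegateConf
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %s plugin, subnet %s, mtu %d\n",
		c.Name, c.Type, c.IPAM.Ranges[0][0]["subnet"], c.MTU)
}
```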
Sep 5 00:01:28.087400 containerd[1723]: time="2025-09-05T00:01:28.087264355Z" level=info msg="StartContainer for \"3a8630fb5feeedfc3949e7e2104b0edcfb6e8ef7e520c71ec6b79b353caf4a09\" returns successfully" Sep 5 00:01:28.896853 kubelet[3254]: I0905 00:01:28.896780 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d9ph8" podStartSLOduration=21.896762748 podStartE2EDuration="21.896762748s" podCreationTimestamp="2025-09-05 00:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:01:28.896761308 +0000 UTC m=+28.275177654" watchObservedRunningTime="2025-09-05 00:01:28.896762748 +0000 UTC m=+28.275179134" Sep 5 00:01:29.251166 systemd-networkd[1457]: cni0: Gained IPv6LL Sep 5 00:01:29.571198 systemd-networkd[1457]: vethb41f167f: Gained IPv6LL Sep 5 00:01:29.769347 containerd[1723]: time="2025-09-05T00:01:29.768963227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl6kv,Uid:1630225e-db47-4f64-9088-f900e6aeafd9,Namespace:kube-system,Attempt:0,}" Sep 5 00:01:29.811654 systemd-networkd[1457]: vetha0e872a7: Link UP Sep 5 00:01:29.824121 kernel: cni0: port 2(vetha0e872a7) entered blocking state Sep 5 00:01:29.824283 kernel: cni0: port 2(vetha0e872a7) entered disabled state Sep 5 00:01:29.828069 kernel: vetha0e872a7: entered allmulticast mode Sep 5 00:01:29.831720 kernel: vetha0e872a7: entered promiscuous mode Sep 5 00:01:29.840058 kernel: cni0: port 2(vetha0e872a7) entered blocking state Sep 5 00:01:29.840129 kernel: cni0: port 2(vetha0e872a7) entered forwarding state Sep 5 00:01:29.845113 systemd-networkd[1457]: vetha0e872a7: Gained carrier Sep 5 00:01:29.846526 containerd[1723]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Sep 5 00:01:29.846526 containerd[1723]: delegateAdd: netconf sent to delegate plugin: Sep 5 00:01:29.880181 containerd[1723]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-05T00:01:29.879674053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:01:29.880181 containerd[1723]: time="2025-09-05T00:01:29.879819134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:01:29.880181 containerd[1723]: time="2025-09-05T00:01:29.879835614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:29.880181 containerd[1723]: time="2025-09-05T00:01:29.879930654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:01:29.904202 systemd[1]: Started cri-containerd-e79c99550b29c85a3fe90e28dbb8c05e7ce093031d68524d7078cfe0399edbb6.scope - libcontainer container e79c99550b29c85a3fe90e28dbb8c05e7ce093031d68524d7078cfe0399edbb6. Sep 5 00:01:29.932604 containerd[1723]: time="2025-09-05T00:01:29.932562224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl6kv,Uid:1630225e-db47-4f64-9088-f900e6aeafd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e79c99550b29c85a3fe90e28dbb8c05e7ce093031d68524d7078cfe0399edbb6\"" Sep 5 00:01:29.936271 containerd[1723]: time="2025-09-05T00:01:29.936233708Z" level=info msg="CreateContainer within sandbox \"e79c99550b29c85a3fe90e28dbb8c05e7ce093031d68524d7078cfe0399edbb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:01:29.971296 containerd[1723]: time="2025-09-05T00:01:29.971251222Z" level=info msg="CreateContainer within sandbox \"e79c99550b29c85a3fe90e28dbb8c05e7ce093031d68524d7078cfe0399edbb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4380446385f329f549bc7b8b55675dfabe6ef747ac4c1dc8eb70a5fbeb40feb8\"" Sep 5 00:01:29.971958 containerd[1723]: time="2025-09-05T00:01:29.971927662Z" level=info msg="StartContainer for \"4380446385f329f549bc7b8b55675dfabe6ef747ac4c1dc8eb70a5fbeb40feb8\"" Sep 5 00:01:29.994185 systemd[1]: Started cri-containerd-4380446385f329f549bc7b8b55675dfabe6ef747ac4c1dc8eb70a5fbeb40feb8.scope - libcontainer container 4380446385f329f549bc7b8b55675dfabe6ef747ac4c1dc8eb70a5fbeb40feb8. Sep 5 00:01:30.027931 containerd[1723]: time="2025-09-05T00:01:30.027885796Z" level=info msg="StartContainer for \"4380446385f329f549bc7b8b55675dfabe6ef747ac4c1dc8eb70a5fbeb40feb8\" returns successfully" Sep 5 00:01:30.895746 kubelet[3254]: I0905 00:01:30.895687 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wl6kv" podStartSLOduration=23.895669151 podStartE2EDuration="23.895669151s" podCreationTimestamp="2025-09-05 00:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:01:30.89553127 +0000 UTC m=+30.273947656" watchObservedRunningTime="2025-09-05 00:01:30.895669151 +0000 UTC m=+30.274085537" Sep 5 00:01:30.979166 systemd-networkd[1457]: vetha0e872a7: Gained IPv6LL Sep 5 00:02:35.129293 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:39502.service - OpenSSH per-connection server daemon (10.200.16.10:39502). Sep 5 00:02:35.645337 sshd[4445]: Accepted publickey for core from 10.200.16.10 port 39502 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI Sep 5 00:02:35.646652 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:02:35.652040 systemd-logind[1703]: New session 8 of user core. Sep 5 00:02:35.655221 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:02:36.103557 sshd[4447]: Connection closed by 10.200.16.10 port 39502 Sep 5 00:02:36.104156 sshd-session[4445]: pam_unix(sshd:session): session closed for user core Sep 5 00:02:36.108067 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:39502.service: Deactivated successfully. Sep 5 00:02:36.110575 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:02:36.111527 systemd-logind[1703]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:02:36.112431 systemd-logind[1703]: Removed session 8. 
Sep 5 00:02:41.202258 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:46198.service - OpenSSH per-connection server daemon (10.200.16.10:46198).
Sep 5 00:02:41.692124 sshd[4482]: Accepted publickey for core from 10.200.16.10 port 46198 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:41.693403 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:41.697518 systemd-logind[1703]: New session 9 of user core.
Sep 5 00:02:41.704172 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 5 00:02:42.138064 sshd[4484]: Connection closed by 10.200.16.10 port 46198
Sep 5 00:02:42.138781 sshd-session[4482]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:42.141497 systemd-logind[1703]: Session 9 logged out. Waiting for processes to exit.
Sep 5 00:02:42.141721 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:46198.service: Deactivated successfully.
Sep 5 00:02:42.144529 systemd[1]: session-9.scope: Deactivated successfully.
Sep 5 00:02:42.146842 systemd-logind[1703]: Removed session 9.
Sep 5 00:02:47.233339 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:46208.service - OpenSSH per-connection server daemon (10.200.16.10:46208).
Sep 5 00:02:47.714145 sshd[4518]: Accepted publickey for core from 10.200.16.10 port 46208 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:47.715422 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:47.720282 systemd-logind[1703]: New session 10 of user core.
Sep 5 00:02:47.732178 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 5 00:02:48.119832 sshd[4541]: Connection closed by 10.200.16.10 port 46208
Sep 5 00:02:48.120421 sshd-session[4518]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:48.123930 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:46208.service: Deactivated successfully.
Sep 5 00:02:48.126800 systemd[1]: session-10.scope: Deactivated successfully.
Sep 5 00:02:48.127968 systemd-logind[1703]: Session 10 logged out. Waiting for processes to exit.
Sep 5 00:02:48.129709 systemd-logind[1703]: Removed session 10.
Sep 5 00:02:48.229652 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:46222.service - OpenSSH per-connection server daemon (10.200.16.10:46222).
Sep 5 00:02:48.733095 sshd[4553]: Accepted publickey for core from 10.200.16.10 port 46222 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:48.734452 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:48.739110 systemd-logind[1703]: New session 11 of user core.
Sep 5 00:02:48.748168 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 5 00:02:49.189073 sshd[4555]: Connection closed by 10.200.16.10 port 46222
Sep 5 00:02:49.189969 sshd-session[4553]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:49.193697 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:46222.service: Deactivated successfully.
Sep 5 00:02:49.195997 systemd[1]: session-11.scope: Deactivated successfully.
Sep 5 00:02:49.197003 systemd-logind[1703]: Session 11 logged out. Waiting for processes to exit.
Sep 5 00:02:49.197924 systemd-logind[1703]: Removed session 11.
Sep 5 00:02:49.284848 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:46226.service - OpenSSH per-connection server daemon (10.200.16.10:46226).
Sep 5 00:02:49.762589 sshd[4565]: Accepted publickey for core from 10.200.16.10 port 46226 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:49.763876 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:49.768072 systemd-logind[1703]: New session 12 of user core.
Sep 5 00:02:49.771169 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 00:02:50.180830 sshd[4567]: Connection closed by 10.200.16.10 port 46226
Sep 5 00:02:50.180730 sshd-session[4565]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:50.183730 systemd-logind[1703]: Session 12 logged out. Waiting for processes to exit.
Sep 5 00:02:50.183910 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:46226.service: Deactivated successfully.
Sep 5 00:02:50.185878 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 00:02:50.189183 systemd-logind[1703]: Removed session 12.
Sep 5 00:02:55.275589 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:54070.service - OpenSSH per-connection server daemon (10.200.16.10:54070).
Sep 5 00:02:55.767369 sshd[4601]: Accepted publickey for core from 10.200.16.10 port 54070 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:55.768597 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:55.773070 systemd-logind[1703]: New session 13 of user core.
Sep 5 00:02:55.781159 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 00:02:56.193585 sshd[4603]: Connection closed by 10.200.16.10 port 54070
Sep 5 00:02:56.194318 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:56.197551 systemd-logind[1703]: Session 13 logged out. Waiting for processes to exit.
Sep 5 00:02:56.198228 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:54070.service: Deactivated successfully.
Sep 5 00:02:56.200507 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 00:02:56.201564 systemd-logind[1703]: Removed session 13.
Sep 5 00:02:56.276698 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:54078.service - OpenSSH per-connection server daemon (10.200.16.10:54078).
Sep 5 00:02:56.733512 sshd[4615]: Accepted publickey for core from 10.200.16.10 port 54078 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:56.734773 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:56.740146 systemd-logind[1703]: New session 14 of user core.
Sep 5 00:02:56.749172 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 00:02:57.241252 sshd[4617]: Connection closed by 10.200.16.10 port 54078
Sep 5 00:02:57.241807 sshd-session[4615]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:57.245287 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:54078.service: Deactivated successfully.
Sep 5 00:02:57.246900 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 00:02:57.247683 systemd-logind[1703]: Session 14 logged out. Waiting for processes to exit.
Sep 5 00:02:57.248577 systemd-logind[1703]: Removed session 14.
Sep 5 00:02:57.335363 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:54094.service - OpenSSH per-connection server daemon (10.200.16.10:54094).
Sep 5 00:02:57.788133 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 54094 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:57.789400 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:57.793656 systemd-logind[1703]: New session 15 of user core.
Sep 5 00:02:57.798158 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 00:02:58.693391 sshd[4649]: Connection closed by 10.200.16.10 port 54094
Sep 5 00:02:58.694104 sshd-session[4626]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:58.697576 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:54094.service: Deactivated successfully.
Sep 5 00:02:58.700084 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 00:02:58.701277 systemd-logind[1703]: Session 15 logged out. Waiting for processes to exit.
Sep 5 00:02:58.702576 systemd-logind[1703]: Removed session 15.
Sep 5 00:02:58.793298 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:54106.service - OpenSSH per-connection server daemon (10.200.16.10:54106).
Sep 5 00:02:59.299284 sshd[4666]: Accepted publickey for core from 10.200.16.10 port 54106 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:02:59.307557 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:59.312086 systemd-logind[1703]: New session 16 of user core.
Sep 5 00:02:59.316173 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:02:59.835072 sshd[4668]: Connection closed by 10.200.16.10 port 54106
Sep 5 00:02:59.835701 sshd-session[4666]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:59.839722 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:54106.service: Deactivated successfully.
Sep 5 00:02:59.841673 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:02:59.842480 systemd-logind[1703]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:02:59.843759 systemd-logind[1703]: Removed session 16.
Sep 5 00:02:59.926502 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:56320.service - OpenSSH per-connection server daemon (10.200.16.10:56320).
Sep 5 00:03:00.403025 sshd[4677]: Accepted publickey for core from 10.200.16.10 port 56320 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:03:00.404474 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:00.408615 systemd-logind[1703]: New session 17 of user core.
Sep 5 00:03:00.417168 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:03:00.819417 sshd[4679]: Connection closed by 10.200.16.10 port 56320
Sep 5 00:03:00.818930 sshd-session[4677]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:00.822154 systemd-logind[1703]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:03:00.822418 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:56320.service: Deactivated successfully.
Sep 5 00:03:00.824149 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:03:00.826461 systemd-logind[1703]: Removed session 17.
Sep 5 00:03:05.924347 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:56330.service - OpenSSH per-connection server daemon (10.200.16.10:56330).
Sep 5 00:03:06.441069 sshd[4716]: Accepted publickey for core from 10.200.16.10 port 56330 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:03:06.442093 sshd-session[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:06.446771 systemd-logind[1703]: New session 18 of user core.
Sep 5 00:03:06.451227 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:03:06.869053 sshd[4718]: Connection closed by 10.200.16.10 port 56330
Sep 5 00:03:06.869634 sshd-session[4716]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:06.872406 systemd-logind[1703]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:03:06.872543 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:56330.service: Deactivated successfully.
Sep 5 00:03:06.874579 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:03:06.876813 systemd-logind[1703]: Removed session 18.
Sep 5 00:03:11.954689 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:59778.service - OpenSSH per-connection server daemon (10.200.16.10:59778).
Sep 5 00:03:12.449107 sshd[4752]: Accepted publickey for core from 10.200.16.10 port 59778 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:03:12.450412 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:12.455263 systemd-logind[1703]: New session 19 of user core.
Sep 5 00:03:12.459166 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:03:12.877829 sshd[4754]: Connection closed by 10.200.16.10 port 59778
Sep 5 00:03:12.877727 sshd-session[4752]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:12.881815 systemd-logind[1703]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:03:12.881915 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:59778.service: Deactivated successfully.
Sep 5 00:03:12.884494 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:03:12.887036 systemd-logind[1703]: Removed session 19.
Sep 5 00:03:17.965295 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:59792.service - OpenSSH per-connection server daemon (10.200.16.10:59792).
Sep 5 00:03:18.418839 sshd[4807]: Accepted publickey for core from 10.200.16.10 port 59792 ssh2: RSA SHA256:aqxfi0PdaFlGLxH6dFeos6aMvDcgZd8ZfY1D2irauCI
Sep 5 00:03:18.420162 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:18.425123 systemd-logind[1703]: New session 20 of user core.
Sep 5 00:03:18.431318 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:03:18.822051 sshd[4809]: Connection closed by 10.200.16.10 port 59792
Sep 5 00:03:18.822621 sshd-session[4807]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:18.825267 systemd-logind[1703]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:03:18.825491 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:59792.service: Deactivated successfully.
Sep 5 00:03:18.827651 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:03:18.829404 systemd-logind[1703]: Removed session 20.
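The SSH entries above all follow the same per-connection lifecycle: systemd starts an sshd@<connection>.service, sshd accepts the public key, pam_unix opens the session, systemd-logind allocates session N and its session-N.scope, and on disconnect the scope and service are deactivated and the session removed. A minimal Go sketch, assuming log text in the format shown above is fed on stdin; it pairs the logind "New session N" / "Removed session N" lines to report how long each session lasted. This is an illustrative log-scraping helper, not part of Flatcar, OpenSSH, or systemd.

```go
// session_lifetimes_sketch.go -- computes SSH session lifetimes from
// systemd-logind lines in a log shaped like the one above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newRe     = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user`)
	removedRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

// parseStamp parses the year-less journal timestamp; Go accepts the
// optional fractional seconds, and the zero year is fine for same-day
// differences.
func parseStamp(s string) (time.Time, error) {
	return time.Parse("Jan 2 15:04:05", s)
}

func main() {
	opened := map[string]time.Time{} // session ID -> time the session was opened
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // tolerate long log lines

	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lasted %s\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}
```

Run against the text above, this would report, for example, that session 8 lasted roughly 460 ms (00:02:35.652040 to 00:02:36.112431).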