Jan 23 00:09:20.076191 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 23 00:09:20.076208 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 22 22:21:53 -00 2026 Jan 23 00:09:20.076215 kernel: KASLR enabled Jan 23 00:09:20.076219 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 23 00:09:20.076222 kernel: printk: legacy bootconsole [pl11] enabled Jan 23 00:09:20.076227 kernel: efi: EFI v2.7 by EDK II Jan 23 00:09:20.076232 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 23 00:09:20.076236 kernel: random: crng init done Jan 23 00:09:20.076240 kernel: secureboot: Secure boot disabled Jan 23 00:09:20.076244 kernel: ACPI: Early table checksum verification disabled Jan 23 00:09:20.076248 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 23 00:09:20.076252 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076256 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076260 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 23 00:09:20.076266 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076270 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076274 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076279 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076283 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076288 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076292 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 23 00:09:20.076296 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 00:09:20.076300 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 23 00:09:20.076305 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 23 00:09:20.076309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 23 00:09:20.076313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 23 00:09:20.076317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 23 00:09:20.076321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 23 00:09:20.076325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 23 00:09:20.076329 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 23 00:09:20.076334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 23 00:09:20.076338 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 23 00:09:20.076342 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 23 00:09:20.076347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 23 00:09:20.076351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 23 00:09:20.076355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x800000000000-0xffffffffffff] hotplug Jan 23 00:09:20.076359 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 23 00:09:20.076363 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 23 00:09:20.076367 kernel: Zone ranges: Jan 23 00:09:20.076372 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 23 00:09:20.076378 kernel: DMA32 empty Jan 23 00:09:20.076383 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 00:09:20.076387 kernel: Device empty Jan 23 00:09:20.076391 kernel: Movable zone start for each node Jan 23 00:09:20.076396 kernel: Early memory node ranges Jan 23 00:09:20.076400 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 23 00:09:20.076405 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 23 00:09:20.076410 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 23 00:09:20.076414 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 23 00:09:20.076418 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 23 00:09:20.076422 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 23 00:09:20.076427 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 00:09:20.076431 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 23 00:09:20.076436 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 23 00:09:20.076440 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 23 00:09:20.076444 kernel: psci: probing for conduit method from ACPI. Jan 23 00:09:20.076448 kernel: psci: PSCIv1.3 detected in firmware. Jan 23 00:09:20.076453 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 00:09:20.076458 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 23 00:09:20.076462 kernel: psci: SMC Calling Convention v1.4 Jan 23 00:09:20.076467 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 23 00:09:20.076471 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 23 00:09:20.076475 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 23 00:09:20.076480 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 23 00:09:20.076484 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 00:09:20.076488 kernel: Detected PIPT I-cache on CPU0 Jan 23 00:09:20.076493 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 23 00:09:20.076497 kernel: CPU features: detected: GIC system register CPU interface Jan 23 00:09:20.076502 kernel: CPU features: detected: Spectre-v4 Jan 23 00:09:20.076506 kernel: CPU features: detected: Spectre-BHB Jan 23 00:09:20.076511 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 00:09:20.076516 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 00:09:20.076520 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 23 00:09:20.076524 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 00:09:20.076529 kernel: alternatives: applying boot alternatives Jan 23 00:09:20.076534 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a Jan 23 00:09:20.076538 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 00:09:20.076543 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 00:09:20.076547 kernel: Fallback order for Node 0: 0 Jan 23 00:09:20.076552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 23 00:09:20.076556 kernel: Policy zone: Normal Jan 23 00:09:20.076561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 00:09:20.076565 kernel: software IO TLB: area num 2. Jan 23 00:09:20.076569 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB) Jan 23 00:09:20.076574 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 00:09:20.076578 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 00:09:20.076583 kernel: rcu: RCU event tracing is enabled. Jan 23 00:09:20.076588 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 00:09:20.076592 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 00:09:20.076597 kernel: Tracing variant of Tasks RCU enabled. Jan 23 00:09:20.076601 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 00:09:20.076605 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 00:09:20.076611 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 00:09:20.076615 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 00:09:20.076619 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 00:09:20.076624 kernel: GICv3: 960 SPIs implemented Jan 23 00:09:20.076628 kernel: GICv3: 0 Extended SPIs implemented Jan 23 00:09:20.076632 kernel: Root IRQ handler: gic_handle_irq Jan 23 00:09:20.076637 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 00:09:20.076641 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 23 00:09:20.076645 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 00:09:20.076650 kernel: ITS: No ITS available, not enabling LPIs Jan 23 00:09:20.076654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 00:09:20.076659 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 23 00:09:20.076664 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 00:09:20.076668 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 23 00:09:20.076673 kernel: Console: colour dummy device 80x25 Jan 23 00:09:20.076677 kernel: printk: legacy console [tty1] enabled Jan 23 00:09:20.076682 kernel: ACPI: Core revision 20240827 Jan 23 00:09:20.076687 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 23 00:09:20.076691 kernel: pid_max: default: 32768 minimum: 301 Jan 23 00:09:20.076696 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 00:09:20.076700 kernel: landlock: Up and running. Jan 23 00:09:20.076705 kernel: SELinux: Initializing. Jan 23 00:09:20.076710 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 00:09:20.076714 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 00:09:20.076719 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 23 00:09:20.076724 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 23 00:09:20.076732 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 00:09:20.076737 kernel: rcu: Hierarchical SRCU implementation. Jan 23 00:09:20.076742 kernel: rcu: Max phase no-delay instances is 400. Jan 23 00:09:20.076747 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 00:09:20.076751 kernel: Remapping and enabling EFI services. Jan 23 00:09:20.076756 kernel: smp: Bringing up secondary CPUs ... Jan 23 00:09:20.076761 kernel: Detected PIPT I-cache on CPU1 Jan 23 00:09:20.076766 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 00:09:20.076771 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 23 00:09:20.076776 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 00:09:20.076780 kernel: SMP: Total of 2 processors activated. 
Jan 23 00:09:20.076785 kernel: CPU: All CPU(s) started at EL1 Jan 23 00:09:20.076791 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 00:09:20.076796 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 00:09:20.076800 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 00:09:20.076805 kernel: CPU features: detected: Common not Private translations Jan 23 00:09:20.076810 kernel: CPU features: detected: CRC32 instructions Jan 23 00:09:20.076815 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 23 00:09:20.076819 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 00:09:20.076824 kernel: CPU features: detected: LSE atomic instructions Jan 23 00:09:20.076829 kernel: CPU features: detected: Privileged Access Never Jan 23 00:09:20.076834 kernel: CPU features: detected: Speculation barrier (SB) Jan 23 00:09:20.076839 kernel: CPU features: detected: TLB range maintenance instructions Jan 23 00:09:20.076844 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 23 00:09:20.076849 kernel: CPU features: detected: Scalable Vector Extension Jan 23 00:09:20.078891 kernel: alternatives: applying system-wide alternatives Jan 23 00:09:20.078907 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 23 00:09:20.078913 kernel: SVE: maximum available vector length 16 bytes per vector Jan 23 00:09:20.078918 kernel: SVE: default vector length 16 bytes per vector Jan 23 00:09:20.078924 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved) Jan 23 00:09:20.078934 kernel: devtmpfs: initialized Jan 23 00:09:20.078939 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 00:09:20.078944 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 00:09:20.078949 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 00:09:20.078953 kernel: 0 pages in range for non-PLT usage Jan 23 00:09:20.078958 kernel: 508400 pages in range for PLT usage Jan 23 00:09:20.078963 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 00:09:20.078968 kernel: SMBIOS 3.1.0 present. Jan 23 00:09:20.078973 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 23 00:09:20.078979 kernel: DMI: Memory slots populated: 2/2 Jan 23 00:09:20.078984 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 00:09:20.078989 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 00:09:20.078993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 00:09:20.078998 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 00:09:20.079003 kernel: audit: initializing netlink subsys (disabled) Jan 23 00:09:20.079008 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 23 00:09:20.079013 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 00:09:20.079019 kernel: cpuidle: using governor menu Jan 23 00:09:20.079024 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 00:09:20.079028 kernel: ASID allocator initialised with 32768 entries Jan 23 00:09:20.079033 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 00:09:20.079038 kernel: Serial: AMBA PL011 UART driver Jan 23 00:09:20.079043 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 00:09:20.079048 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 00:09:20.079053 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 00:09:20.079057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 00:09:20.079063 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 00:09:20.079068 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 00:09:20.079073 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 00:09:20.079078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 00:09:20.079082 kernel: ACPI: Added _OSI(Module Device) Jan 23 00:09:20.079087 kernel: ACPI: Added _OSI(Processor Device) Jan 23 00:09:20.079092 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 00:09:20.079097 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 00:09:20.079102 kernel: ACPI: Interpreter enabled Jan 23 00:09:20.079107 kernel: ACPI: Using GIC for interrupt routing Jan 23 00:09:20.079112 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 00:09:20.079117 kernel: printk: legacy console [ttyAMA0] enabled Jan 23 00:09:20.079122 kernel: printk: legacy bootconsole [pl11] disabled Jan 23 00:09:20.079128 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 00:09:20.079133 kernel: ACPI: CPU0 has been hot-added Jan 23 00:09:20.079138 kernel: ACPI: CPU1 has been hot-added Jan 23 00:09:20.079143 kernel: iommu: Default domain type: Translated Jan 23 00:09:20.079147 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 00:09:20.079152 kernel: efivars: Registered efivars operations Jan 23 00:09:20.079158 kernel: vgaarb: loaded Jan 23 00:09:20.079163 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 00:09:20.079167 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 00:09:20.079172 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 00:09:20.079177 kernel: pnp: PnP ACPI init Jan 23 00:09:20.079182 kernel: pnp: PnP ACPI: found 0 devices Jan 23 00:09:20.079186 kernel: NET: Registered PF_INET protocol family Jan 23 00:09:20.079191 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 00:09:20.079196 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 00:09:20.079202 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 00:09:20.079207 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 00:09:20.079212 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 00:09:20.079216 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 00:09:20.079221 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:09:20.079226 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:09:20.079231 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 00:09:20.079236 kernel: PCI: CLS 0 bytes, default 64 Jan 23 00:09:20.079240 kernel: kvm [1]: HYP mode not available Jan 
23 00:09:20.079246 kernel: Initialise system trusted keyrings Jan 23 00:09:20.079251 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 00:09:20.079256 kernel: Key type asymmetric registered Jan 23 00:09:20.079260 kernel: Asymmetric key parser 'x509' registered Jan 23 00:09:20.079265 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 00:09:20.079270 kernel: io scheduler mq-deadline registered Jan 23 00:09:20.079275 kernel: io scheduler kyber registered Jan 23 00:09:20.079280 kernel: io scheduler bfq registered Jan 23 00:09:20.079284 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 00:09:20.079290 kernel: thunder_xcv, ver 1.0 Jan 23 00:09:20.079295 kernel: thunder_bgx, ver 1.0 Jan 23 00:09:20.079299 kernel: nicpf, ver 1.0 Jan 23 00:09:20.079304 kernel: nicvf, ver 1.0 Jan 23 00:09:20.079430 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 00:09:20.079483 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T00:09:19 UTC (1769126959) Jan 23 00:09:20.079490 kernel: efifb: probing for efifb Jan 23 00:09:20.079496 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 00:09:20.079501 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 00:09:20.079506 kernel: efifb: scrolling: redraw Jan 23 00:09:20.079510 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 00:09:20.079515 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 00:09:20.079520 kernel: fb0: EFI VGA frame buffer device Jan 23 00:09:20.079525 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 00:09:20.079530 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 00:09:20.079534 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 23 00:09:20.079540 kernel: watchdog: NMI not fully supported Jan 23 00:09:20.079545 kernel: watchdog: Hard watchdog permanently disabled Jan 23 00:09:20.079550 kernel: NET: Registered PF_INET6 protocol family Jan 23 00:09:20.079554 kernel: Segment Routing with IPv6 Jan 23 00:09:20.079559 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 00:09:20.079564 kernel: NET: Registered PF_PACKET protocol family Jan 23 00:09:20.079569 kernel: Key type dns_resolver registered Jan 23 00:09:20.079574 kernel: registered taskstats version 1 Jan 23 00:09:20.079578 kernel: Loading compiled-in X.509 certificates Jan 23 00:09:20.079583 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 380753d9165686712e58c1d21e00c0268e70f18f' Jan 23 00:09:20.079589 kernel: Demotion targets for Node 0: null Jan 23 00:09:20.079594 kernel: Key type .fscrypt registered Jan 23 00:09:20.079598 kernel: Key type fscrypt-provisioning registered Jan 23 00:09:20.079603 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 00:09:20.079608 kernel: ima: Allocated hash algorithm: sha1 Jan 23 00:09:20.079613 kernel: ima: No architecture policies found Jan 23 00:09:20.079618 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 00:09:20.079622 kernel: clk: Disabling unused clocks Jan 23 00:09:20.079627 kernel: PM: genpd: Disabling unused power domains Jan 23 00:09:20.079633 kernel: Warning: unable to open an initial console. 
Jan 23 00:09:20.079638 kernel: Freeing unused kernel memory: 39552K Jan 23 00:09:20.079642 kernel: Run /init as init process Jan 23 00:09:20.079647 kernel: with arguments: Jan 23 00:09:20.079652 kernel: /init Jan 23 00:09:20.079656 kernel: with environment: Jan 23 00:09:20.079661 kernel: HOME=/ Jan 23 00:09:20.079665 kernel: TERM=linux Jan 23 00:09:20.079671 systemd[1]: Successfully made /usr/ read-only. Jan 23 00:09:20.079679 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:09:20.079685 systemd[1]: Detected virtualization microsoft. Jan 23 00:09:20.079690 systemd[1]: Detected architecture arm64. Jan 23 00:09:20.079695 systemd[1]: Running in initrd. Jan 23 00:09:20.079700 systemd[1]: No hostname configured, using default hostname. Jan 23 00:09:20.079705 systemd[1]: Hostname set to . Jan 23 00:09:20.079710 systemd[1]: Initializing machine ID from random generator. Jan 23 00:09:20.079716 systemd[1]: Queued start job for default target initrd.target. Jan 23 00:09:20.079721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:09:20.079727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:09:20.079732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 00:09:20.079738 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:09:20.079743 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 00:09:20.079749 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 00:09:20.079755 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 00:09:20.079761 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 00:09:20.079766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:09:20.079771 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:09:20.079776 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:09:20.079781 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:09:20.079786 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:09:20.079791 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:09:20.079797 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:09:20.079803 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:09:20.079808 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 00:09:20.079813 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 00:09:20.079818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:09:20.079823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:09:20.079828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 23 00:09:20.079834 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:09:20.079839 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 00:09:20.079845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:09:20.079850 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 00:09:20.083566 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 00:09:20.083576 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 00:09:20.083582 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:09:20.083587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 00:09:20.083613 systemd-journald[225]: Collecting audit messages is disabled. Jan 23 00:09:20.083631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:09:20.083637 systemd-journald[225]: Journal started Jan 23 00:09:20.083652 systemd-journald[225]: Runtime Journal (/run/log/journal/6b3bbd165bd245688bfab34d1ef590ff) is 8M, max 78.3M, 70.3M free. Jan 23 00:09:20.083974 systemd-modules-load[227]: Inserted module 'overlay' Jan 23 00:09:20.105499 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:09:20.105534 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 00:09:20.112572 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 00:09:20.121620 kernel: Bridge firewalling registered Jan 23 00:09:20.116964 systemd-modules-load[227]: Inserted module 'br_netfilter' Jan 23 00:09:20.118086 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:09:20.137436 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 00:09:20.141160 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:09:20.148941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:09:20.160141 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 00:09:20.181379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:09:20.186273 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:09:20.205041 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:09:20.217224 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:09:20.228415 systemd-tmpfiles[250]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 00:09:20.235387 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:09:20.241111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:09:20.251914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:09:20.264822 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 00:09:20.288985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:09:20.299994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 00:09:20.315821 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a Jan 23 00:09:20.323563 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:09:20.354363 systemd-resolved[263]: Positive Trust Anchors: Jan 23 00:09:20.354372 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:09:20.354391 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:09:20.356649 systemd-resolved[263]: Defaulting to hostname 'linux'. Jan 23 00:09:20.358107 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:09:20.365677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:09:20.459870 kernel: SCSI subsystem initialized Jan 23 00:09:20.464873 kernel: Loading iSCSI transport class v2.0-870. Jan 23 00:09:20.472893 kernel: iscsi: registered transport (tcp) Jan 23 00:09:20.485769 kernel: iscsi: registered transport (qla4xxx) Jan 23 00:09:20.485809 kernel: QLogic iSCSI HBA Driver Jan 23 00:09:20.498641 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:09:20.516355 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:09:20.522873 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:09:20.568543 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 00:09:20.574613 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 00:09:20.634879 kernel: raid6: neonx8 gen() 18547 MB/s Jan 23 00:09:20.653862 kernel: raid6: neonx4 gen() 18544 MB/s Jan 23 00:09:20.674874 kernel: raid6: neonx2 gen() 17062 MB/s Jan 23 00:09:20.692866 kernel: raid6: neonx1 gen() 15054 MB/s Jan 23 00:09:20.711862 kernel: raid6: int64x8 gen() 10554 MB/s Jan 23 00:09:20.730862 kernel: raid6: int64x4 gen() 10614 MB/s Jan 23 00:09:20.750863 kernel: raid6: int64x2 gen() 8979 MB/s Jan 23 00:09:20.772840 kernel: raid6: int64x1 gen() 7010 MB/s Jan 23 00:09:20.772922 kernel: raid6: using algorithm neonx8 gen() 18547 MB/s Jan 23 00:09:20.795512 kernel: raid6: .... 
xor() 14900 MB/s, rmw enabled Jan 23 00:09:20.795521 kernel: raid6: using neon recovery algorithm Jan 23 00:09:20.804969 kernel: xor: measuring software checksum speed Jan 23 00:09:20.804980 kernel: 8regs : 28575 MB/sec Jan 23 00:09:20.807570 kernel: 32regs : 28795 MB/sec Jan 23 00:09:20.810263 kernel: arm64_neon : 37666 MB/sec Jan 23 00:09:20.813338 kernel: xor: using function: arm64_neon (37666 MB/sec) Jan 23 00:09:20.851879 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 00:09:20.857217 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:09:20.867694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:09:20.897602 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jan 23 00:09:20.900527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:09:20.916012 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 00:09:20.942344 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Jan 23 00:09:20.962627 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:09:20.969995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:09:21.017833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:09:21.031041 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 00:09:21.089873 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 00:09:21.093726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:09:21.096038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:09:21.122946 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 00:09:21.122963 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 00:09:21.122978 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 00:09:21.122985 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 00:09:21.123223 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:09:21.144510 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 23 00:09:21.138262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:09:21.157319 kernel: PTP clock support registered Jan 23 00:09:21.153130 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:09:21.167022 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 00:09:21.185032 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 00:09:21.185068 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 00:09:21.185076 kernel: hv_vmbus: registering driver hv_utils Jan 23 00:09:21.182374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 00:09:21.207759 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 23 00:09:21.207774 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 00:09:21.207780 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 00:09:21.207787 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 00:09:21.215477 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 00:09:21.215869 kernel: scsi host0: storvsc_host_t Jan 23 00:09:21.681303 systemd-resolved[263]: Clock change detected. Flushing caches. Jan 23 00:09:21.692082 kernel: scsi host1: storvsc_host_t Jan 23 00:09:21.692374 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 23 00:09:21.697982 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 00:09:21.698031 kernel: hv_netvsc 7ced8dd4-e250-7ced-8dd4-e2507ced8dd4 eth0: VF slot 1 added Jan 23 00:09:21.711315 kernel: hv_vmbus: registering driver hv_pci Jan 23 00:09:21.721290 kernel: hv_pci 411c3d29-488c-4b46-8262-54b076e1aba0: PCI VMBus probing: Using version 0x10004 Jan 23 00:09:21.721422 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 23 00:09:21.721513 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 23 00:09:21.726703 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 00:09:21.726830 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 00:09:21.741953 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 00:09:21.742107 kernel: hv_pci 411c3d29-488c-4b46-8262-54b076e1aba0: PCI host bridge to bus 488c:00 Jan 23 00:09:21.742188 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 23 00:09:21.742277 kernel: pci_bus 488c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 00:09:21.750030 kernel: pci_bus 488c:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 00:09:21.757263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 23 00:09:21.757408 kernel: pci 488c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 23 00:09:21.768378 kernel: pci 488c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 00:09:21.772314 kernel: pci 488c:00:02.0: enabling Extended Tags Jan 23 00:09:21.797312 kernel: pci 488c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 488c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 23 00:09:21.807650 kernel: pci_bus 488c:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 00:09:21.807776 kernel: pci 488c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 23 00:09:21.825130 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:09:21.825168 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 00:09:21.838086 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 00:09:21.838245 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 00:09:21.841274 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 00:09:21.858280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 00:09:21.882280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#91 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 00:09:21.899050 kernel: mlx5_core 488c:00:02.0: enabling device (0000 -> 0002) Jan 23 00:09:21.907838 kernel: mlx5_core 488c:00:02.0: PTM is 
not supported by PCIe Jan 23 00:09:21.907955 kernel: mlx5_core 488c:00:02.0: firmware version: 16.30.5026 Jan 23 00:09:22.078300 kernel: hv_netvsc 7ced8dd4-e250-7ced-8dd4-e2507ced8dd4 eth0: VF registering: eth1 Jan 23 00:09:22.078507 kernel: mlx5_core 488c:00:02.0 eth1: joined to eth0 Jan 23 00:09:22.084198 kernel: mlx5_core 488c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 00:09:22.093276 kernel: mlx5_core 488c:00:02.0 enP18572s1: renamed from eth1 Jan 23 00:09:22.307032 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 00:09:22.430332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 00:09:22.467781 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 00:09:22.528860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 00:09:22.534225 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 23 00:09:22.544965 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 00:09:22.557356 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:09:22.566890 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:09:22.577679 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:09:22.587804 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 00:09:22.608927 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 00:09:22.630662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 23 00:09:22.634296 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:09:22.649794 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:09:23.659944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 23 00:09:23.675976 disk-uuid[655]: The operation has completed successfully. Jan 23 00:09:23.682520 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:09:23.747278 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 00:09:23.748301 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 00:09:23.776640 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 00:09:23.800476 sh[821]: Success Jan 23 00:09:23.837132 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 00:09:23.837180 kernel: device-mapper: uevent: version 1.0.3 Jan 23 00:09:23.842101 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 00:09:23.852279 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 00:09:24.106852 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 00:09:24.114542 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 00:09:24.131047 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 00:09:24.159292 kernel: BTRFS: device fsid 97a43946-ed04-45c1-a355-c0350e8b973e devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (839) Jan 23 00:09:24.159327 kernel: BTRFS info (device dm-0): first mount of filesystem 97a43946-ed04-45c1-a355-c0350e8b973e Jan 23 00:09:24.163825 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:09:24.430622 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 00:09:24.430708 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 00:09:24.503412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 00:09:24.507492 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:09:24.514967 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 00:09:24.515667 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 00:09:24.537867 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 00:09:24.567294 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (862) Jan 23 00:09:24.578263 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:09:24.578298 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:09:24.604815 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:09:24.604860 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:09:24.614291 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:09:24.616473 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 00:09:24.626053 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 00:09:24.664278 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:09:24.675099 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:09:24.707737 systemd-networkd[1008]: lo: Link UP Jan 23 00:09:24.707747 systemd-networkd[1008]: lo: Gained carrier Jan 23 00:09:24.708851 systemd-networkd[1008]: Enumeration completed Jan 23 00:09:24.708928 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:09:24.715052 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:09:24.715055 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:09:24.715563 systemd[1]: Reached target network.target - Network. Jan 23 00:09:24.788270 kernel: mlx5_core 488c:00:02.0 enP18572s1: Link up Jan 23 00:09:24.822274 kernel: hv_netvsc 7ced8dd4-e250-7ced-8dd4-e2507ced8dd4 eth0: Data path switched to VF: enP18572s1 Jan 23 00:09:24.822766 systemd-networkd[1008]: enP18572s1: Link UP Jan 23 00:09:24.822835 systemd-networkd[1008]: eth0: Link UP Jan 23 00:09:24.822900 systemd-networkd[1008]: eth0: Gained carrier Jan 23 00:09:24.822913 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 00:09:24.841613 systemd-networkd[1008]: enP18572s1: Gained carrier Jan 23 00:09:24.853282 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 00:09:25.885358 ignition[961]: Ignition 2.22.0 Jan 23 00:09:25.885372 ignition[961]: Stage: fetch-offline Jan 23 00:09:25.889405 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:09:25.885470 ignition[961]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:09:25.897198 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 00:09:25.885476 ignition[961]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:09:25.885553 ignition[961]: parsed url from cmdline: "" Jan 23 00:09:25.885555 ignition[961]: no config URL provided Jan 23 00:09:25.885558 ignition[961]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:09:25.885563 ignition[961]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:09:25.885566 ignition[961]: failed to fetch config: resource requires networking Jan 23 00:09:25.886029 ignition[961]: Ignition finished successfully Jan 23 00:09:25.935753 ignition[1018]: Ignition 2.22.0 Jan 23 00:09:25.935766 ignition[1018]: Stage: fetch Jan 23 00:09:25.935933 ignition[1018]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:09:25.935943 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:09:25.936012 ignition[1018]: parsed url from cmdline: "" Jan 23 00:09:25.936014 ignition[1018]: no config URL provided Jan 23 00:09:25.936018 ignition[1018]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:09:25.936023 ignition[1018]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:09:25.936037 ignition[1018]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 00:09:26.001216 ignition[1018]: GET result: OK Jan 23 00:09:26.003486 ignition[1018]: config has been read from IMDS userdata Jan 23 00:09:26.003507 ignition[1018]: parsing config with SHA512: 9de124add7c9ee50b3d2fc50ce059d00e64cf532e569d981ba7e93cd4b04e4675bf0f4055ae6d6a48fdc8e7ff643665c4de51c2e545c0b806243c83d52b13944 Jan 23 00:09:26.009430 unknown[1018]: fetched base config from "system" Jan 23 00:09:26.009627 ignition[1018]: fetch: fetch complete Jan 23 00:09:26.009435 unknown[1018]: fetched base config from "system" Jan 23 00:09:26.009630 ignition[1018]: fetch: fetch passed Jan 23 00:09:26.009438 unknown[1018]: fetched user config from "azure" Jan 23 00:09:26.009662 ignition[1018]: Ignition finished successfully Jan 23 00:09:26.011276 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 00:09:26.021015 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 00:09:26.059692 ignition[1024]: Ignition 2.22.0 Jan 23 00:09:26.062140 ignition[1024]: Stage: kargs Jan 23 00:09:26.062351 ignition[1024]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:09:26.065771 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 00:09:26.062359 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:09:26.073908 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 23 00:09:26.062987 ignition[1024]: kargs: kargs passed Jan 23 00:09:26.063029 ignition[1024]: Ignition finished successfully Jan 23 00:09:26.104539 ignition[1030]: Ignition 2.22.0 Jan 23 00:09:26.104553 ignition[1030]: Stage: disks Jan 23 00:09:26.108354 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 00:09:26.104722 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:09:26.115077 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 00:09:26.104729 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:09:26.123256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 00:09:26.105228 ignition[1030]: disks: disks passed Jan 23 00:09:26.132144 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:09:26.105273 ignition[1030]: Ignition finished successfully Jan 23 00:09:26.140792 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:09:26.149521 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:09:26.158949 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 00:09:26.218350 systemd-networkd[1008]: eth0: Gained IPv6LL Jan 23 00:09:26.259137 systemd-fsck[1038]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 23 00:09:26.262947 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 00:09:26.275973 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 00:09:26.531271 kernel: EXT4-fs (sda9): mounted filesystem f31390ab-27e9-47d9-a374-053913301d53 r/w with ordered data mode. Quota mode: none. Jan 23 00:09:26.532107 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 00:09:26.535849 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 00:09:26.568574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:09:26.574051 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 00:09:26.586801 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 00:09:26.598487 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 00:09:26.598519 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:09:26.604761 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 00:09:26.632751 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1053) Jan 23 00:09:26.629110 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 00:09:26.654195 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:09:26.654223 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:09:26.664993 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:09:26.665037 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:09:26.666242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 00:09:27.183769 coreos-metadata[1055]: Jan 23 00:09:27.183 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 00:09:27.190686 coreos-metadata[1055]: Jan 23 00:09:27.190 INFO Fetch successful Jan 23 00:09:27.190686 coreos-metadata[1055]: Jan 23 00:09:27.190 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 00:09:27.203966 coreos-metadata[1055]: Jan 23 00:09:27.203 INFO Fetch successful Jan 23 00:09:27.218306 coreos-metadata[1055]: Jan 23 00:09:27.218 INFO wrote hostname ci-4459.2.2-n-aedec2d11e to /sysroot/etc/hostname Jan 23 00:09:27.226299 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 00:09:27.346588 initrd-setup-root[1084]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 00:09:27.383429 initrd-setup-root[1091]: cut: /sysroot/etc/group: No such file or directory Jan 23 00:09:27.405648 initrd-setup-root[1098]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 00:09:27.412804 initrd-setup-root[1105]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 00:09:28.401227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 00:09:28.407636 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 00:09:28.427885 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 00:09:28.439592 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 00:09:28.448688 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:09:28.471412 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 00:09:28.476330 ignition[1177]: INFO : Ignition 2.22.0 Jan 23 00:09:28.476330 ignition[1177]: INFO : Stage: mount Jan 23 00:09:28.476330 ignition[1177]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:09:28.476330 ignition[1177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:09:28.476330 ignition[1177]: INFO : mount: mount passed Jan 23 00:09:28.476330 ignition[1177]: INFO : Ignition finished successfully Jan 23 00:09:28.481376 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 00:09:28.487293 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 00:09:28.512369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:09:28.542268 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1189) Jan 23 00:09:28.552329 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:09:28.552367 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:09:28.561536 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:09:28.561577 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:09:28.563100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 00:09:28.592924 ignition[1207]: INFO : Ignition 2.22.0 Jan 23 00:09:28.592924 ignition[1207]: INFO : Stage: files Jan 23 00:09:28.599418 ignition[1207]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:09:28.599418 ignition[1207]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:09:28.599418 ignition[1207]: DEBUG : files: compiled without relabeling support, skipping Jan 23 00:09:28.612898 ignition[1207]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 00:09:28.612898 ignition[1207]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 00:09:28.656979 ignition[1207]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 00:09:28.663265 ignition[1207]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 00:09:28.663265 ignition[1207]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 00:09:28.663265 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 00:09:28.663265 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 00:09:28.658188 unknown[1207]: wrote ssh authorized keys file for user: core Jan 23 00:09:28.705181 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 00:09:28.883550 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:09:28.891420 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:09:28.948288 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:09:28.948288 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:09:28.948288 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:09:28.948288 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:09:28.948288 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:09:28.948288 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 00:09:29.463389 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 00:09:29.700113 ignition[1207]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:09:29.700113 ignition[1207]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 00:09:29.745137 ignition[1207]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:09:29.754069 ignition[1207]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:09:29.754069 ignition[1207]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 00:09:29.754069 ignition[1207]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 00:09:29.754069 ignition[1207]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 00:09:29.754069 ignition[1207]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:09:29.754069 ignition[1207]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:09:29.754069 ignition[1207]: INFO : files: files passed Jan 23 00:09:29.754069 ignition[1207]: INFO : Ignition finished successfully Jan 23 00:09:29.755500 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 00:09:29.767013 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 00:09:29.797872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 00:09:29.811477 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 00:09:29.817644 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 00:09:29.846900 initrd-setup-root-after-ignition[1236]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:09:29.846900 initrd-setup-root-after-ignition[1236]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:09:29.859740 initrd-setup-root-after-ignition[1240]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:09:29.854107 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:09:29.865021 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 00:09:29.876496 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 00:09:29.920622 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 00:09:29.920712 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 23 00:09:29.930125 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 00:09:29.939904 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 00:09:29.948183 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 00:09:29.948841 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 00:09:29.985586 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:09:29.991893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 00:09:30.017582 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:09:30.022785 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:09:30.032555 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 00:09:30.041048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 00:09:30.041141 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:09:30.054158 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 00:09:30.058896 systemd[1]: Stopped target basic.target - Basic System. Jan 23 00:09:30.070080 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 00:09:30.080104 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:09:30.089898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 00:09:30.100578 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:09:30.110087 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 00:09:30.118895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:09:30.129249 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 00:09:30.137880 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 00:09:30.147532 systemd[1]: Stopped target swap.target - Swaps. Jan 23 00:09:30.155629 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 00:09:30.155742 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:09:30.167188 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:09:30.172470 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:09:30.181840 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 00:09:30.185965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:09:30.191872 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 00:09:30.191965 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 00:09:30.206079 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 00:09:30.206162 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:09:30.211826 systemd[1]: ignition-files.service: Deactivated successfully. 
Jan 23 00:09:30.278329 ignition[1260]: INFO : Ignition 2.22.0 Jan 23 00:09:30.278329 ignition[1260]: INFO : Stage: umount Jan 23 00:09:30.278329 ignition[1260]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:09:30.278329 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 00:09:30.278329 ignition[1260]: INFO : umount: umount passed Jan 23 00:09:30.278329 ignition[1260]: INFO : Ignition finished successfully Jan 23 00:09:30.211895 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 00:09:30.220620 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 00:09:30.220684 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 00:09:30.233011 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 00:09:30.248642 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 00:09:30.263340 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 00:09:30.263452 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:09:30.281638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 00:09:30.281718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:09:30.299773 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 00:09:30.300368 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 00:09:30.300455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 00:09:30.309689 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 00:09:30.309771 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 00:09:30.318841 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 00:09:30.318883 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 00:09:30.329090 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 00:09:30.329135 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 00:09:30.338326 systemd[1]: Stopped target network.target - Network. Jan 23 00:09:30.348241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 00:09:30.348302 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:09:30.358758 systemd[1]: Stopped target paths.target - Path Units. Jan 23 00:09:30.368270 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 00:09:30.377025 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:09:30.382777 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 00:09:30.391896 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 00:09:30.400726 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 00:09:30.400769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:09:30.410264 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 00:09:30.410291 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:09:30.418668 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 00:09:30.418715 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 00:09:30.427493 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 00:09:30.427521 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 23 00:09:30.437055 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 00:09:30.445806 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 00:09:30.459492 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 00:09:30.459580 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 00:09:30.474111 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 00:09:30.474350 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 00:09:30.474441 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 00:09:30.483740 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 00:09:30.483916 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 00:09:30.483992 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 00:09:30.494665 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 00:09:30.716810 kernel: hv_netvsc 7ced8dd4-e250-7ced-8dd4-e2507ced8dd4 eth0: Data path switched from VF: enP18572s1 Jan 23 00:09:30.496268 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 00:09:30.505979 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 00:09:30.514137 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 00:09:30.514173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:09:30.523504 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 00:09:30.523558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 00:09:30.535920 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 00:09:30.552545 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 00:09:30.552605 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:09:30.561141 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 00:09:30.561180 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:09:30.569603 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 00:09:30.569639 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 00:09:30.579952 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 00:09:30.579985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:09:30.593591 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:09:30.602486 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 00:09:30.602540 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:09:30.618988 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 00:09:30.619126 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:09:30.629747 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 00:09:30.629809 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 00:09:30.639064 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 00:09:30.639102 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 23 00:09:30.648224 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 00:09:30.648284 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:09:30.662879 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 00:09:30.662918 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 00:09:30.671140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 00:09:30.671185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:09:30.686970 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 00:09:30.702491 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 00:09:30.702541 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:09:30.716696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 00:09:30.716736 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:09:30.726293 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 00:09:30.726338 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:09:30.741800 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 00:09:30.741838 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:09:30.755728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:09:30.755770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:09:30.769923 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 00:09:30.769968 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 00:09:30.769989 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 00:09:30.770016 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:09:30.770337 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 00:09:30.770424 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 00:09:30.839306 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 00:09:30.839422 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 00:09:30.848187 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 00:09:30.856740 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 00:09:30.881456 systemd[1]: Switching root. Jan 23 00:09:31.006207 systemd-journald[225]: Journal stopped Jan 23 00:09:35.547992 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). 
Jan 23 00:09:35.548013 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 00:09:35.548021 kernel: SELinux: policy capability open_perms=1 Jan 23 00:09:35.548026 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 00:09:35.548033 kernel: SELinux: policy capability always_check_network=0 Jan 23 00:09:35.548038 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 00:09:35.548045 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 00:09:35.548050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 00:09:35.548055 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 00:09:35.548060 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 00:09:35.548066 kernel: audit: type=1403 audit(1769126972.032:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 00:09:35.548072 systemd[1]: Successfully loaded SELinux policy in 207.040ms. Jan 23 00:09:35.548079 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.317ms. Jan 23 00:09:35.548085 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:09:35.548092 systemd[1]: Detected virtualization microsoft. Jan 23 00:09:35.548099 systemd[1]: Detected architecture arm64. Jan 23 00:09:35.548104 systemd[1]: Detected first boot. Jan 23 00:09:35.548110 systemd[1]: Hostname set to <ci-4459.2.2-n-aedec2d11e>. Jan 23 00:09:35.548116 systemd[1]: Initializing machine ID from random generator. Jan 23 00:09:35.548122 zram_generator::config[1303]: No configuration found. Jan 23 00:09:35.548128 kernel: NET: Registered PF_VSOCK protocol family Jan 23 00:09:35.548134 systemd[1]: Populated /etc with preset unit settings. Jan 23 00:09:35.548140 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 00:09:35.548147 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 00:09:35.548153 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 00:09:35.548159 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 00:09:35.548165 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 00:09:35.548189 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 00:09:35.548195 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 00:09:35.548201 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 00:09:35.548208 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 00:09:35.548214 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 00:09:35.548220 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 00:09:35.548226 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 00:09:35.548232 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:09:35.548238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:09:35.548244 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 23 00:09:35.548260 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 00:09:35.548268 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 00:09:35.548275 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:09:35.548282 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 00:09:35.548288 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:09:35.548295 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:09:35.548301 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 00:09:35.548307 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 00:09:35.548313 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 00:09:35.548320 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 00:09:35.548326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:09:35.548333 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:09:35.548339 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:09:35.548345 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:09:35.548351 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 00:09:35.548357 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 00:09:35.548365 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 00:09:35.548371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:09:35.548377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:09:35.548383 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:09:35.548389 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 00:09:35.548395 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 00:09:35.548403 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 00:09:35.548409 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 00:09:35.548415 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 00:09:35.548421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 00:09:35.548427 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 00:09:35.548434 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 00:09:35.548440 systemd[1]: Reached target machines.target - Containers. Jan 23 00:09:35.548447 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 00:09:35.548454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:09:35.548460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:09:35.548467 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 00:09:35.548473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 23 00:09:35.548479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:09:35.548486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:09:35.548492 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 00:09:35.548498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:09:35.548505 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 00:09:35.548511 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 00:09:35.548518 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 00:09:35.548524 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 00:09:35.548530 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 00:09:35.548536 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:09:35.548543 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:09:35.548549 kernel: loop: module loaded Jan 23 00:09:35.548555 kernel: fuse: init (API version 7.41) Jan 23 00:09:35.548560 kernel: ACPI: bus type drm_connector registered Jan 23 00:09:35.548580 systemd-journald[1397]: Collecting audit messages is disabled. Jan 23 00:09:35.548593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 00:09:35.548601 systemd-journald[1397]: Journal started Jan 23 00:09:35.548615 systemd-journald[1397]: Runtime Journal (/run/log/journal/ca48d6e4bbf24428a269885598ffebea) is 8M, max 78.3M, 70.3M free. Jan 23 00:09:34.869455 systemd[1]: Queued start job for default target multi-user.target. Jan 23 00:09:34.879721 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 00:09:34.880101 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 00:09:34.880381 systemd[1]: systemd-journald.service: Consumed 2.480s CPU time. Jan 23 00:09:35.568338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:09:35.579282 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 00:09:35.591694 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 00:09:35.598342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:09:35.609378 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 00:09:35.609427 systemd[1]: Stopped verity-setup.service. Jan 23 00:09:35.622703 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:09:35.623327 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 00:09:35.627782 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 00:09:35.632553 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 00:09:35.636776 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 00:09:35.641507 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 00:09:35.646084 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 00:09:35.650368 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 23 00:09:35.655341 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:09:35.660627 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 00:09:35.660764 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 00:09:35.666183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:09:35.666311 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:09:35.671473 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:09:35.671587 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:09:35.676140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:09:35.676351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:09:35.681878 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 00:09:35.681988 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 00:09:35.686696 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:09:35.686814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:09:35.691582 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:09:35.696706 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:09:35.702381 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 00:09:35.707757 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 00:09:35.713278 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:09:35.726822 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:09:35.732420 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 00:09:35.742774 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 00:09:35.749991 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 00:09:35.750021 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:09:35.755115 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 00:09:35.761152 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 00:09:35.765326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:09:35.781384 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 00:09:35.793751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 00:09:35.798557 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:09:35.801361 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 00:09:35.805907 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:09:35.806874 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:09:35.818083 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 23 00:09:35.825461 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:09:35.835669 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 00:09:35.844687 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 00:09:35.852290 systemd-journald[1397]: Time spent on flushing to /var/log/journal/ca48d6e4bbf24428a269885598ffebea is 8.848ms for 934 entries. Jan 23 00:09:35.852290 systemd-journald[1397]: System Journal (/var/log/journal/ca48d6e4bbf24428a269885598ffebea) is 8M, max 2.6G, 2.6G free. Jan 23 00:09:35.877922 systemd-journald[1397]: Received client request to flush runtime journal. Jan 23 00:09:35.860476 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 00:09:35.866710 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 00:09:35.875431 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 00:09:35.883583 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 00:09:35.906628 kernel: loop0: detected capacity change from 0 to 100632 Jan 23 00:09:35.927413 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 00:09:35.927951 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 00:09:35.958058 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Jan 23 00:09:35.958074 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Jan 23 00:09:35.958405 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:09:35.963578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:09:35.973389 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 00:09:36.031032 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 00:09:36.037283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:09:36.051565 systemd-tmpfiles[1460]: ACLs are not supported, ignoring. Jan 23 00:09:36.051580 systemd-tmpfiles[1460]: ACLs are not supported, ignoring. Jan 23 00:09:36.054497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:09:36.349280 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 00:09:36.411844 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 00:09:36.421267 kernel: loop1: detected capacity change from 0 to 27936 Jan 23 00:09:36.421896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:09:36.448122 systemd-udevd[1468]: Using default interface naming scheme 'v255'. Jan 23 00:09:36.801135 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:09:36.812604 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:09:36.868504 kernel: loop2: detected capacity change from 0 to 119840 Jan 23 00:09:36.864423 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 00:09:36.926800 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 00:09:36.933144 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 00:09:36.960382 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 00:09:36.975493 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 00:09:37.030152 kernel: hv_vmbus: registering driver hv_balloon Jan 23 00:09:37.030246 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 00:09:37.034128 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 23 00:09:37.080304 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 00:09:37.083336 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 00:09:37.089718 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 00:09:37.090626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:09:37.098198 kernel: Console: switching to colour dummy device 80x25 Jan 23 00:09:37.106200 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 00:09:37.108653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:09:37.108838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:09:37.115865 systemd-networkd[1486]: lo: Link UP Jan 23 00:09:37.115876 systemd-networkd[1486]: lo: Gained carrier Jan 23 00:09:37.116827 systemd-networkd[1486]: Enumeration completed Jan 23 00:09:37.117695 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:09:37.117706 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:09:37.118117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:09:37.122947 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:09:37.131133 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 00:09:37.138423 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 00:09:37.146031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:09:37.148678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:09:37.155165 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:09:37.161611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:09:37.186267 kernel: mlx5_core 488c:00:02.0 enP18572s1: Link up Jan 23 00:09:37.215851 kernel: hv_netvsc 7ced8dd4-e250-7ced-8dd4-e2507ced8dd4 eth0: Data path switched to VF: enP18572s1 Jan 23 00:09:37.216403 systemd-networkd[1486]: enP18572s1: Link UP Jan 23 00:09:37.217179 systemd-networkd[1486]: eth0: Link UP Jan 23 00:09:37.217185 systemd-networkd[1486]: eth0: Gained carrier Jan 23 00:09:37.217205 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:09:37.218978 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 23 00:09:37.226524 systemd-networkd[1486]: enP18572s1: Gained carrier Jan 23 00:09:37.233619 systemd-networkd[1486]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 00:09:37.249283 kernel: loop3: detected capacity change from 0 to 207008 Jan 23 00:09:37.280388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 00:09:37.286944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 00:09:37.297593 kernel: MACsec IEEE 802.1AE Jan 23 00:09:37.328297 kernel: loop4: detected capacity change from 0 to 100632 Jan 23 00:09:37.340485 kernel: loop5: detected capacity change from 0 to 27936 Jan 23 00:09:37.353327 kernel: loop6: detected capacity change from 0 to 119840 Jan 23 00:09:37.358964 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 00:09:37.378294 kernel: loop7: detected capacity change from 0 to 207008 Jan 23 00:09:37.390329 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 00:09:37.390749 (sd-merge)[1612]: Merged extensions into '/usr'. Jan 23 00:09:37.394088 systemd[1]: Reload requested from client PID 1442 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 00:09:37.394108 systemd[1]: Reloading... Jan 23 00:09:37.454286 zram_generator::config[1641]: No configuration found. Jan 23 00:09:37.622678 systemd[1]: Reloading finished in 227 ms. Jan 23 00:09:37.638510 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 00:09:37.657394 systemd[1]: Starting ensure-sysext.service... Jan 23 00:09:37.663442 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:09:37.679788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:09:37.686823 systemd[1]: Reload requested from client PID 1701 ('systemctl') (unit ensure-sysext.service)... Jan 23 00:09:37.686838 systemd[1]: Reloading... Jan 23 00:09:37.696455 systemd-tmpfiles[1702]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 00:09:37.697059 systemd-tmpfiles[1702]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 00:09:37.697286 systemd-tmpfiles[1702]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 00:09:37.697429 systemd-tmpfiles[1702]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 00:09:37.697847 systemd-tmpfiles[1702]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 00:09:37.697981 systemd-tmpfiles[1702]: ACLs are not supported, ignoring. Jan 23 00:09:37.698007 systemd-tmpfiles[1702]: ACLs are not supported, ignoring. Jan 23 00:09:37.705030 systemd-tmpfiles[1702]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:09:37.705130 systemd-tmpfiles[1702]: Skipping /boot Jan 23 00:09:37.709781 systemd-tmpfiles[1702]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:09:37.711364 systemd-tmpfiles[1702]: Skipping /boot Jan 23 00:09:37.746854 zram_generator::config[1735]: No configuration found. Jan 23 00:09:37.898212 systemd[1]: Reloading finished in 211 ms. 
Jan 23 00:09:37.909248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:09:37.934970 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:09:37.964520 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 00:09:37.969391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:09:37.975909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:09:37.984386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:09:37.991328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:09:37.995665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:09:37.995765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:09:37.998436 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 00:09:38.006430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:09:38.013345 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 00:09:38.020709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:09:38.020862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:09:38.026279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:09:38.026412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:09:38.032040 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:09:38.032175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:09:38.042961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:09:38.046428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:09:38.053525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:09:38.063146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:09:38.067172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:09:38.067282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:09:38.072484 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 00:09:38.078966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:09:38.079095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:09:38.084101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:09:38.084219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:09:38.090194 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 23 00:09:38.090339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:09:38.099814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:09:38.100791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:09:38.110615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:09:38.117422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:09:38.124741 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:09:38.130557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:09:38.130654 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:09:38.130756 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 00:09:38.135990 systemd-resolved[1799]: Positive Trust Anchors: Jan 23 00:09:38.136002 systemd-resolved[1799]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:09:38.136022 systemd-resolved[1799]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:09:38.136422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:09:38.137101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:09:38.143765 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:09:38.144018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:09:38.149780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:09:38.149909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:09:38.155391 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:09:38.155511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:09:38.156831 systemd-resolved[1799]: Using system hostname 'ci-4459.2.2-n-aedec2d11e'. Jan 23 00:09:38.160054 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:09:38.168911 systemd[1]: Finished ensure-sysext.service. Jan 23 00:09:38.175420 systemd[1]: Reached target network.target - Network. Jan 23 00:09:38.179790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:09:38.184563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:09:38.184620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:09:38.220602 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 23 00:09:38.231932 augenrules[1840]: No rules Jan 23 00:09:38.233033 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:09:38.233224 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:09:38.737776 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 00:09:38.743799 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 00:09:39.018401 systemd-networkd[1486]: eth0: Gained IPv6LL Jan 23 00:09:39.020268 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:09:39.025719 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:09:41.198205 ldconfig[1437]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 00:09:41.210970 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 00:09:41.217325 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 00:09:41.229723 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 00:09:41.234690 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:09:41.239165 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 00:09:41.244259 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 00:09:41.249857 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 00:09:41.254310 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 00:09:41.259486 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 00:09:41.264940 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 00:09:41.264964 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:09:41.268629 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:09:41.273690 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 00:09:41.279572 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 00:09:41.284871 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 00:09:41.290518 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 00:09:41.295827 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 00:09:41.311840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 00:09:41.316470 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 00:09:41.321910 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 00:09:41.326692 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:09:41.330642 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:09:41.334549 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:09:41.334572 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 23 00:09:41.336718 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 00:09:41.348343 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 00:09:41.354426 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 00:09:41.360414 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 00:09:41.367374 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 00:09:41.373424 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 00:09:41.385427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 00:09:41.390395 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 00:09:41.392158 jq[1861]: false Jan 23 00:09:41.393356 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 00:09:41.397617 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 00:09:41.399348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:09:41.404551 KVP[1863]: KVP starting; pid is:1863 Jan 23 00:09:41.406375 chronyd[1853]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 00:09:41.414002 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 00:09:41.420222 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 00:09:41.426269 kernel: hv_utils: KVP IC version 4.0 Jan 23 00:09:41.426556 KVP[1863]: KVP LIC Version: 3.1 Jan 23 00:09:41.430979 chronyd[1853]: Timezone right/UTC failed leap second check, ignoring Jan 23 00:09:41.431114 chronyd[1853]: Loaded seccomp filter (level 2) Jan 23 00:09:41.431666 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 00:09:41.438406 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 00:09:41.444792 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 00:09:41.454405 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 00:09:41.458920 extend-filesystems[1862]: Found /dev/sda6 Jan 23 00:09:41.461446 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 00:09:41.461791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 00:09:41.463733 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 00:09:41.472752 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 00:09:41.479629 extend-filesystems[1862]: Found /dev/sda9 Jan 23 00:09:41.480401 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 00:09:41.486459 jq[1885]: true Jan 23 00:09:41.487759 extend-filesystems[1862]: Checking size of /dev/sda9 Jan 23 00:09:41.491589 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 00:09:41.499832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 00:09:41.499997 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 23 00:09:41.525200 update_engine[1882]: I20260123 00:09:41.514632 1882 main.cc:92] Flatcar Update Engine starting Jan 23 00:09:41.501542 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 00:09:41.501681 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 00:09:41.509796 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 00:09:41.509945 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 00:09:41.531870 (ntainerd)[1893]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:09:41.534707 jq[1892]: true Jan 23 00:09:41.562588 systemd-logind[1875]: New seat seat0. Jan 23 00:09:41.564018 systemd-logind[1875]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 00:09:41.564187 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:09:41.588344 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 00:09:41.596362 extend-filesystems[1862]: Old size kept for /dev/sda9 Jan 23 00:09:41.603067 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:09:41.603275 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 00:09:41.619260 tar[1891]: linux-arm64/LICENSE Jan 23 00:09:41.619260 tar[1891]: linux-arm64/helm Jan 23 00:09:41.630806 bash[1920]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:09:41.634832 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:09:41.643927 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 00:09:41.721862 dbus-daemon[1856]: [system] SELinux support is enabled Jan 23 00:09:41.722258 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 00:09:41.733002 update_engine[1882]: I20260123 00:09:41.732800 1882 update_check_scheduler.cc:74] Next update check in 7m8s Jan 23 00:09:41.733444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 00:09:41.733569 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 00:09:41.733837 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 00:09:41.743952 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:09:41.743970 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 00:09:41.753621 systemd[1]: Started update-engine.service - Update Engine. Jan 23 00:09:41.768613 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 23 00:09:41.785508 coreos-metadata[1855]: Jan 23 00:09:41.783 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 00:09:41.795364 coreos-metadata[1855]: Jan 23 00:09:41.789 INFO Fetch successful Jan 23 00:09:41.795364 coreos-metadata[1855]: Jan 23 00:09:41.789 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 00:09:41.795600 coreos-metadata[1855]: Jan 23 00:09:41.795 INFO Fetch successful Jan 23 00:09:41.795600 coreos-metadata[1855]: Jan 23 00:09:41.795 INFO Fetching http://168.63.129.16/machine/336649cc-d70b-472d-af41-363cf76955bc/c56fe7bb%2D220e%2D478f%2D947d%2D55d9667856cf.%5Fci%2D4459.2.2%2Dn%2Daedec2d11e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 00:09:41.798925 coreos-metadata[1855]: Jan 23 00:09:41.798 INFO Fetch successful Jan 23 00:09:41.798925 coreos-metadata[1855]: Jan 23 00:09:41.798 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 00:09:41.809710 coreos-metadata[1855]: Jan 23 00:09:41.809 INFO Fetch successful Jan 23 00:09:41.846795 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 00:09:41.854849 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 00:09:41.956023 sshd_keygen[1878]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:09:41.985459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:09:41.996299 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 00:09:42.003941 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 00:09:42.024154 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 00:09:42.027299 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:09:42.031618 locksmithd[1994]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:09:42.038766 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 00:09:42.049589 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 00:09:42.072463 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:09:42.083478 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:09:42.089827 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 00:09:42.097087 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:09:42.166220 tar[1891]: linux-arm64/README.md Jan 23 00:09:42.181865 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 00:09:42.215701 containerd[1893]: time="2026-01-23T00:09:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:09:42.217271 containerd[1893]: time="2026-01-23T00:09:42.216531752Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:09:42.222909 containerd[1893]: time="2026-01-23T00:09:42.222879440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.256µs" Jan 23 00:09:42.222990 containerd[1893]: time="2026-01-23T00:09:42.222969592Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:09:42.223057 containerd[1893]: time="2026-01-23T00:09:42.223045552Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:09:42.223953 containerd[1893]: time="2026-01-23T00:09:42.223931392Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:09:42.224027 containerd[1893]: time="2026-01-23T00:09:42.224016504Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:09:42.224082 containerd[1893]: time="2026-01-23T00:09:42.224072720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224190 containerd[1893]: time="2026-01-23T00:09:42.224176776Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224239 containerd[1893]: time="2026-01-23T00:09:42.224227400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224478 containerd[1893]: time="2026-01-23T00:09:42.224458552Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224549 containerd[1893]: time="2026-01-23T00:09:42.224535032Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224597 containerd[1893]: time="2026-01-23T00:09:42.224584912Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224636 containerd[1893]: time="2026-01-23T00:09:42.224624312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224758 containerd[1893]: time="2026-01-23T00:09:42.224743336Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:09:42.224985 containerd[1893]: time="2026-01-23T00:09:42.224968256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:09:42.225067 containerd[1893]: time="2026-01-23T00:09:42.225055416Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 23 00:09:42.225104 containerd[1893]: time="2026-01-23T00:09:42.225095816Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:09:42.225167 containerd[1893]: time="2026-01-23T00:09:42.225156536Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:09:42.225382 containerd[1893]: time="2026-01-23T00:09:42.225365368Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:09:42.225502 containerd[1893]: time="2026-01-23T00:09:42.225488192Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:09:42.236217 containerd[1893]: time="2026-01-23T00:09:42.236134680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236299240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236320304Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236329872Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236338624Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236345792Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236357592Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236365112Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236377040Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236383480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236389312Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236397912Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236499280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236512880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:09:42.237684 containerd[1893]: time="2026-01-23T00:09:42.236524880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236532592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236539496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236546368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236553968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236560920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236568800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236577088Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236583792Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236624272Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236634016Z" level=info msg="Start snapshots syncer" Jan 23 00:09:42.237898 containerd[1893]: time="2026-01-23T00:09:42.236657584Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:09:42.238029 containerd[1893]: time="2026-01-23T00:09:42.236831616Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:09:42.238029 containerd[1893]: time="2026-01-23T00:09:42.236864928Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.236902152Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.236988752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237002760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237010048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237018712Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237026016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237032560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237039088Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237055480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:09:42.238106 containerd[1893]: 
time="2026-01-23T00:09:42.237062480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237070472Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237089864Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237101688Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:09:42.238106 containerd[1893]: time="2026-01-23T00:09:42.237106528Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237112040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237116456Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237122520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237128816Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237139176Z" level=info msg="runtime interface created" Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237142696Z" level=info msg="created NRI interface" Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237147848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237155384Z" level=info msg="Connect containerd service" Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237174000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:09:42.238270 containerd[1893]: time="2026-01-23T00:09:42.237740256Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:09:42.396965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:09:42.407730 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568430568Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568484256Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568507936Z" level=info msg="Start subscribing containerd event" Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568545824Z" level=info msg="Start recovering state" Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568611000Z" level=info msg="Start event monitor" Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568619872Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568624408Z" level=info msg="Start streaming server" Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568630136Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:09:42.568580 containerd[1893]: time="2026-01-23T00:09:42.568634440Z" level=info msg="runtime interface starting up..." Jan 23 00:09:42.571470 containerd[1893]: time="2026-01-23T00:09:42.568638184Z" level=info msg="starting plugins..." Jan 23 00:09:42.571470 containerd[1893]: time="2026-01-23T00:09:42.568649040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:09:42.571470 containerd[1893]: time="2026-01-23T00:09:42.568734848Z" level=info msg="containerd successfully booted in 0.353351s" Jan 23 00:09:42.568846 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:09:42.576205 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 00:09:42.587248 systemd[1]: Startup finished in 1.713s (kernel) + 11.735s (initrd) + 10.759s (userspace) = 24.208s. Jan 23 00:09:42.771552 login[2034]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:42.771936 login[2033]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:42.775360 kubelet[2052]: E0123 00:09:42.775313 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:09:42.777301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:09:42.777415 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:09:42.778509 systemd[1]: kubelet.service: Consumed 544ms CPU time, 254M memory peak. Jan 23 00:09:42.794523 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:09:42.795428 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:09:42.800048 systemd-logind[1875]: New session 2 of user core. Jan 23 00:09:42.803854 systemd-logind[1875]: New session 1 of user core. Jan 23 00:09:42.827757 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 00:09:42.831508 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:09:42.852390 (systemd)[2071]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:09:42.855178 systemd-logind[1875]: New session c1 of user core. Jan 23 00:09:42.963743 systemd[2071]: Queued start job for default target default.target. Jan 23 00:09:42.971007 systemd[2071]: Created slice app.slice - User Application Slice. Jan 23 00:09:42.971028 systemd[2071]: Reached target paths.target - Paths. 
Jan 23 00:09:42.971057 systemd[2071]: Reached target timers.target - Timers. Jan 23 00:09:42.972010 systemd[2071]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 00:09:42.979449 systemd[2071]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:09:42.979579 systemd[2071]: Reached target sockets.target - Sockets. Jan 23 00:09:42.979683 systemd[2071]: Reached target basic.target - Basic System. Jan 23 00:09:42.979771 systemd[2071]: Reached target default.target - Main User Target. Jan 23 00:09:42.979793 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:09:42.979875 systemd[2071]: Startup finished in 119ms. Jan 23 00:09:42.985361 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 00:09:42.985865 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 00:09:43.665928 waagent[2031]: 2026-01-23T00:09:43.665854Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 00:09:43.670479 waagent[2031]: 2026-01-23T00:09:43.670433Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 23 00:09:43.673713 waagent[2031]: 2026-01-23T00:09:43.673683Z INFO Daemon Daemon Python: 3.11.13 Jan 23 00:09:43.676964 waagent[2031]: 2026-01-23T00:09:43.676913Z INFO Daemon Daemon Run daemon Jan 23 00:09:43.679962 waagent[2031]: 2026-01-23T00:09:43.679801Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 23 00:09:43.686482 waagent[2031]: 2026-01-23T00:09:43.686444Z INFO Daemon Daemon Using waagent for provisioning Jan 23 00:09:43.690354 waagent[2031]: 2026-01-23T00:09:43.690321Z INFO Daemon Daemon Activate resource disk Jan 23 00:09:43.693751 waagent[2031]: 2026-01-23T00:09:43.693720Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 00:09:43.702098 waagent[2031]: 2026-01-23T00:09:43.702063Z INFO Daemon Daemon Found device: None Jan 23 00:09:43.705376 waagent[2031]: 2026-01-23T00:09:43.705345Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 00:09:43.711485 waagent[2031]: 2026-01-23T00:09:43.711459Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 00:09:43.720218 waagent[2031]: 2026-01-23T00:09:43.720181Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 00:09:43.724444 waagent[2031]: 2026-01-23T00:09:43.724415Z INFO Daemon Daemon Running default provisioning handler Jan 23 00:09:43.733754 waagent[2031]: 2026-01-23T00:09:43.733710Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 00:09:43.743622 waagent[2031]: 2026-01-23T00:09:43.743586Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 00:09:43.750779 waagent[2031]: 2026-01-23T00:09:43.750750Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 00:09:43.754472 waagent[2031]: 2026-01-23T00:09:43.754444Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 00:09:43.851278 waagent[2031]: 2026-01-23T00:09:43.850680Z INFO Daemon Daemon Successfully mounted dvd Jan 23 00:09:43.877165 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 23 00:09:43.878891 waagent[2031]: 2026-01-23T00:09:43.878840Z INFO Daemon Daemon Detect protocol endpoint Jan 23 00:09:43.882546 waagent[2031]: 2026-01-23T00:09:43.882512Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 00:09:43.886751 waagent[2031]: 2026-01-23T00:09:43.886721Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 00:09:43.892057 waagent[2031]: 2026-01-23T00:09:43.892033Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 00:09:43.895962 waagent[2031]: 2026-01-23T00:09:43.895934Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 00:09:43.899953 waagent[2031]: 2026-01-23T00:09:43.899928Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 00:09:43.948195 waagent[2031]: 2026-01-23T00:09:43.948124Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 00:09:43.953398 waagent[2031]: 2026-01-23T00:09:43.953377Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 00:09:43.957509 waagent[2031]: 2026-01-23T00:09:43.957483Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 00:09:44.086076 waagent[2031]: 2026-01-23T00:09:44.081323Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 00:09:44.086412 waagent[2031]: 2026-01-23T00:09:44.086369Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 00:09:44.093892 waagent[2031]: 2026-01-23T00:09:44.093855Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 00:09:44.109948 waagent[2031]: 2026-01-23T00:09:44.109914Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 00:09:44.114481 waagent[2031]: 2026-01-23T00:09:44.114448Z INFO Daemon Jan 23 00:09:44.116650 waagent[2031]: 2026-01-23T00:09:44.116618Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8b3404c1-6ac0-439e-a880-910fb9d428da eTag: 14057718830604087572 source: Fabric] Jan 23 00:09:44.125128 waagent[2031]: 2026-01-23T00:09:44.125095Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 00:09:44.130292 waagent[2031]: 2026-01-23T00:09:44.130244Z INFO Daemon Jan 23 00:09:44.132755 waagent[2031]: 2026-01-23T00:09:44.132725Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 00:09:44.142850 waagent[2031]: 2026-01-23T00:09:44.142824Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 00:09:44.201687 waagent[2031]: 2026-01-23T00:09:44.201612Z INFO Daemon Downloaded certificate {'thumbprint': '58CECBA5F5631786812853AE2ED20F365DAA5F4B', 'hasPrivateKey': True} Jan 23 00:09:44.209192 waagent[2031]: 2026-01-23T00:09:44.209157Z INFO Daemon Fetch goal state completed Jan 23 00:09:44.217965 waagent[2031]: 2026-01-23T00:09:44.217937Z INFO Daemon Daemon Starting provisioning Jan 23 00:09:44.221895 waagent[2031]: 2026-01-23T00:09:44.221861Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 00:09:44.225485 waagent[2031]: 2026-01-23T00:09:44.225453Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-aedec2d11e] Jan 23 00:09:44.231507 waagent[2031]: 2026-01-23T00:09:44.231475Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-aedec2d11e] Jan 23 00:09:44.236092 waagent[2031]: 2026-01-23T00:09:44.236055Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 00:09:44.241227 waagent[2031]: 2026-01-23T00:09:44.241194Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 00:09:44.250931 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 00:09:44.251394 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:09:44.251513 systemd-networkd[1486]: eth0: DHCP lease lost Jan 23 00:09:44.251809 waagent[2031]: 2026-01-23T00:09:44.251774Z INFO Daemon Daemon Create user account if not exists Jan 23 00:09:44.256081 waagent[2031]: 2026-01-23T00:09:44.256048Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 00:09:44.260159 waagent[2031]: 2026-01-23T00:09:44.260129Z INFO Daemon Daemon Configure sudoer Jan 23 00:09:44.266878 waagent[2031]: 2026-01-23T00:09:44.266841Z INFO Daemon Daemon Configure sshd Jan 23 00:09:44.273619 waagent[2031]: 2026-01-23T00:09:44.273581Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 00:09:44.282823 waagent[2031]: 2026-01-23T00:09:44.282786Z INFO Daemon Daemon Deploy ssh public key. Jan 23 00:09:44.291296 systemd-networkd[1486]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 00:09:45.371111 waagent[2031]: 2026-01-23T00:09:45.367687Z INFO Daemon Daemon Provisioning complete Jan 23 00:09:45.380905 waagent[2031]: 2026-01-23T00:09:45.380871Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 00:09:45.385289 waagent[2031]: 2026-01-23T00:09:45.385249Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 00:09:45.391933 waagent[2031]: 2026-01-23T00:09:45.391905Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 00:09:45.490492 waagent[2121]: 2026-01-23T00:09:45.490420Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 00:09:45.490764 waagent[2121]: 2026-01-23T00:09:45.490543Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 23 00:09:45.490764 waagent[2121]: 2026-01-23T00:09:45.490582Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 00:09:45.490764 waagent[2121]: 2026-01-23T00:09:45.490616Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 23 00:09:45.537481 waagent[2121]: 2026-01-23T00:09:45.537417Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 00:09:45.537618 waagent[2121]: 2026-01-23T00:09:45.537590Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 00:09:45.537657 waagent[2121]: 2026-01-23T00:09:45.537639Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 00:09:45.543126 waagent[2121]: 2026-01-23T00:09:45.543081Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 00:09:45.547707 waagent[2121]: 2026-01-23T00:09:45.547677Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 00:09:45.548045 waagent[2121]: 2026-01-23T00:09:45.548016Z INFO ExtHandler Jan 23 00:09:45.548095 waagent[2121]: 2026-01-23T00:09:45.548077Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: fddb82c0-eda7-425f-954e-43a264a94344 eTag: 14057718830604087572 source: Fabric] Jan 23 00:09:45.548332 waagent[2121]: 2026-01-23T00:09:45.548305Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 00:09:45.548728 waagent[2121]: 2026-01-23T00:09:45.548699Z INFO ExtHandler Jan 23 00:09:45.548765 waagent[2121]: 2026-01-23T00:09:45.548749Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 00:09:45.551873 waagent[2121]: 2026-01-23T00:09:45.551845Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 00:09:45.601324 waagent[2121]: 2026-01-23T00:09:45.601245Z INFO ExtHandler Downloaded certificate {'thumbprint': '58CECBA5F5631786812853AE2ED20F365DAA5F4B', 'hasPrivateKey': True} Jan 23 00:09:45.601704 waagent[2121]: 2026-01-23T00:09:45.601667Z INFO ExtHandler Fetch goal state completed Jan 23 00:09:45.613152 waagent[2121]: 2026-01-23T00:09:45.613109Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 23 00:09:45.616580 waagent[2121]: 2026-01-23T00:09:45.616534Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2121 Jan 23 00:09:45.616681 waagent[2121]: 2026-01-23T00:09:45.616655Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 00:09:45.616918 waagent[2121]: 2026-01-23T00:09:45.616891Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 00:09:45.617997 waagent[2121]: 2026-01-23T00:09:45.617963Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 00:09:45.618333 waagent[2121]: 2026-01-23T00:09:45.618302Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 00:09:45.618449 waagent[2121]: 2026-01-23T00:09:45.618426Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 00:09:45.618848 waagent[2121]: 2026-01-23T00:09:45.618820Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 00:09:45.669663 waagent[2121]: 2026-01-23T00:09:45.669565Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 00:09:45.669784 waagent[2121]: 2026-01-23T00:09:45.669753Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 00:09:45.674421 waagent[2121]: 2026-01-23T00:09:45.674356Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 00:09:45.678972 systemd[1]: Reload requested from client PID 2136 ('systemctl') (unit waagent.service)... Jan 23 00:09:45.678987 systemd[1]: Reloading... Jan 23 00:09:45.749276 zram_generator::config[2175]: No configuration found. Jan 23 00:09:45.899161 systemd[1]: Reloading finished in 219 ms. Jan 23 00:09:45.914308 waagent[2121]: 2026-01-23T00:09:45.913960Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 00:09:45.914308 waagent[2121]: 2026-01-23T00:09:45.914095Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 00:09:47.106292 waagent[2121]: 2026-01-23T00:09:47.105892Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 23 00:09:47.106292 waagent[2121]: 2026-01-23T00:09:47.106187Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 00:09:47.106841 waagent[2121]: 2026-01-23T00:09:47.106800Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 00:09:47.107164 waagent[2121]: 2026-01-23T00:09:47.107135Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 00:09:47.107261 waagent[2121]: 2026-01-23T00:09:47.107198Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 00:09:47.107294 waagent[2121]: 2026-01-23T00:09:47.107263Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 00:09:47.107494 waagent[2121]: 2026-01-23T00:09:47.107451Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 00:09:47.107667 waagent[2121]: 2026-01-23T00:09:47.107633Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 00:09:47.107893 waagent[2121]: 2026-01-23T00:09:47.107820Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 00:09:47.107939 waagent[2121]: 2026-01-23T00:09:47.107910Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 00:09:47.107939 waagent[2121]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 00:09:47.107939 waagent[2121]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 00:09:47.107939 waagent[2121]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 00:09:47.107939 waagent[2121]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 00:09:47.107939 waagent[2121]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 00:09:47.107939 waagent[2121]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 00:09:47.108183 waagent[2121]: 2026-01-23T00:09:47.108153Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 00:09:47.108635 waagent[2121]: 2026-01-23T00:09:47.108567Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 00:09:47.108678 waagent[2121]: 2026-01-23T00:09:47.108644Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 23 00:09:47.108850 waagent[2121]: 2026-01-23T00:09:47.108820Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 00:09:47.110147 waagent[2121]: 2026-01-23T00:09:47.109086Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 00:09:47.110389 waagent[2121]: 2026-01-23T00:09:47.110306Z INFO EnvHandler ExtHandler Configure routes Jan 23 00:09:47.112756 waagent[2121]: 2026-01-23T00:09:47.112718Z INFO EnvHandler ExtHandler Gateway:None Jan 23 00:09:47.112815 waagent[2121]: 2026-01-23T00:09:47.112793Z INFO EnvHandler ExtHandler Routes:None Jan 23 00:09:47.114569 waagent[2121]: 2026-01-23T00:09:47.114538Z INFO ExtHandler ExtHandler Jan 23 00:09:47.114694 waagent[2121]: 2026-01-23T00:09:47.114672Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5ddcd422-9e94-4319-8e63-a402ab0ba723 correlation d44fb598-91c0-49f0-9bd6-8c3ece6bbc3c created: 2026-01-23T00:08:50.048279Z] Jan 23 00:09:47.115013 waagent[2121]: 2026-01-23T00:09:47.114983Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 00:09:47.115501 waagent[2121]: 2026-01-23T00:09:47.115472Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 23 00:09:47.155281 waagent[2121]: 2026-01-23T00:09:47.154936Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 00:09:47.155281 waagent[2121]: Try `iptables -h' or 'iptables --help' for more information.) Jan 23 00:09:47.155470 waagent[2121]: 2026-01-23T00:09:47.155439Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B266A2B4-577E-400B-A57C-80E1BDD01AA0;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 00:09:47.202037 waagent[2121]: 2026-01-23T00:09:47.201966Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 00:09:47.202037 waagent[2121]: Executing ['ip', '-a', '-o', 'link']: Jan 23 00:09:47.202037 waagent[2121]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 00:09:47.202037 waagent[2121]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d4:e2:50 brd ff:ff:ff:ff:ff:ff Jan 23 00:09:47.202037 waagent[2121]: 3: enP18572s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d4:e2:50 brd ff:ff:ff:ff:ff:ff\ altname enP18572p0s2 Jan 23 00:09:47.202037 waagent[2121]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 00:09:47.202037 waagent[2121]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 00:09:47.202037 waagent[2121]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 00:09:47.202037 waagent[2121]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 00:09:47.202037 waagent[2121]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 00:09:47.202037 waagent[2121]: 2: eth0 inet6 fe80::7eed:8dff:fed4:e250/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 00:09:47.305931 waagent[2121]: 2026-01-23T00:09:47.305871Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 00:09:47.305931 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:09:47.305931 waagent[2121]: pkts bytes target prot opt in out source destination Jan 23 00:09:47.305931 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:09:47.305931 waagent[2121]: pkts bytes target prot opt in out source destination Jan 23 00:09:47.305931 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:09:47.305931 waagent[2121]: pkts bytes target prot opt in out source destination Jan 23 00:09:47.305931 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 00:09:47.305931 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 00:09:47.305931 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 00:09:47.308687 waagent[2121]: 2026-01-23T00:09:47.308388Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 00:09:47.308687 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:09:47.308687 waagent[2121]: pkts bytes target prot opt in out source destination 
Jan 23 00:09:47.308687 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:09:47.308687 waagent[2121]: pkts bytes target prot opt in out source destination Jan 23 00:09:47.308687 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 00:09:47.308687 waagent[2121]: pkts bytes target prot opt in out source destination Jan 23 00:09:47.308687 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 00:09:47.308687 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 00:09:47.308687 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 00:09:47.308687 waagent[2121]: 2026-01-23T00:09:47.308601Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 00:09:53.028057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 00:09:53.029349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:09:53.131359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:09:53.137594 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:09:53.245264 kubelet[2270]: E0123 00:09:53.245205 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:09:53.248084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:09:53.248202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:09:53.248698 systemd[1]: kubelet.service: Consumed 110ms CPU time, 105M memory peak. Jan 23 00:10:03.485535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 00:10:03.488444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:03.797924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:10:03.802665 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:10:03.832350 kubelet[2284]: E0123 00:10:03.832294 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:10:03.834432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:10:03.834547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:10:03.835045 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.3M memory peak. Jan 23 00:10:05.247335 chronyd[1853]: Selected source PHC0 Jan 23 00:10:06.254636 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:10:06.256189 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:35044.service - OpenSSH per-connection server daemon (10.200.16.10:35044). 
Jan 23 00:10:06.890638 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 35044 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:10:06.891720 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:06.895719 systemd-logind[1875]: New session 3 of user core. Jan 23 00:10:06.914388 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 00:10:07.320461 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:35060.service - OpenSSH per-connection server daemon (10.200.16.10:35060). Jan 23 00:10:07.783338 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 35060 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:10:07.784437 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:07.788412 systemd-logind[1875]: New session 4 of user core. Jan 23 00:10:07.798385 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 00:10:08.115761 sshd[2302]: Connection closed by 10.200.16.10 port 35060 Jan 23 00:10:08.116418 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:08.120236 systemd-logind[1875]: Session 4 logged out. Waiting for processes to exit. Jan 23 00:10:08.120643 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:35060.service: Deactivated successfully. Jan 23 00:10:08.121931 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 00:10:08.123367 systemd-logind[1875]: Removed session 4. Jan 23 00:10:08.201884 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:35068.service - OpenSSH per-connection server daemon (10.200.16.10:35068). Jan 23 00:10:08.665156 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 35068 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:10:08.668059 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:08.672150 systemd-logind[1875]: New session 5 of user core. Jan 23 00:10:08.680420 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 00:10:08.995280 sshd[2311]: Connection closed by 10.200.16.10 port 35068 Jan 23 00:10:08.995486 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:08.998757 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:35068.service: Deactivated successfully. Jan 23 00:10:09.000716 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 00:10:09.001468 systemd-logind[1875]: Session 5 logged out. Waiting for processes to exit. Jan 23 00:10:09.002564 systemd-logind[1875]: Removed session 5. Jan 23 00:10:09.086028 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:35072.service - OpenSSH per-connection server daemon (10.200.16.10:35072). Jan 23 00:10:09.577335 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 35072 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:10:09.578418 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:09.581948 systemd-logind[1875]: New session 6 of user core. Jan 23 00:10:09.589378 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 00:10:09.927368 sshd[2320]: Connection closed by 10.200.16.10 port 35072 Jan 23 00:10:09.927911 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:09.931858 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:35072.service: Deactivated successfully. 
Jan 23 00:10:09.933238 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 00:10:09.934883 systemd-logind[1875]: Session 6 logged out. Waiting for processes to exit. Jan 23 00:10:09.935802 systemd-logind[1875]: Removed session 6. Jan 23 00:10:10.016078 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:44798.service - OpenSSH per-connection server daemon (10.200.16.10:44798). Jan 23 00:10:10.506965 sshd[2326]: Accepted publickey for core from 10.200.16.10 port 44798 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:10:10.508077 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:10.512866 systemd-logind[1875]: New session 7 of user core. Jan 23 00:10:10.519411 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 00:10:10.912563 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 00:10:10.912789 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:10:12.374619 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 00:10:12.384556 (dockerd)[2348]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 00:10:13.234233 dockerd[2348]: time="2026-01-23T00:10:13.234175083Z" level=info msg="Starting up" Jan 23 00:10:13.234857 dockerd[2348]: time="2026-01-23T00:10:13.234833603Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 00:10:13.243412 dockerd[2348]: time="2026-01-23T00:10:13.243372203Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 00:10:13.927785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 00:10:13.929962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:14.040682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:10:14.047751 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:10:14.145428 kubelet[2377]: E0123 00:10:14.145375 2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:10:14.147615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:10:14.147734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:10:14.148244 systemd[1]: kubelet.service: Consumed 112ms CPU time, 105.7M memory peak. Jan 23 00:10:17.492128 dockerd[2348]: time="2026-01-23T00:10:17.492077540Z" level=info msg="Loading containers: start." Jan 23 00:10:17.541278 kernel: Initializing XFRM netlink socket Jan 23 00:10:17.897440 systemd-networkd[1486]: docker0: Link UP Jan 23 00:10:17.910104 dockerd[2348]: time="2026-01-23T00:10:17.910007477Z" level=info msg="Loading containers: done." Jan 23 00:10:17.920308 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3156880867-merged.mount: Deactivated successfully. 
Jan 23 00:10:18.742747 dockerd[2348]: time="2026-01-23T00:10:18.742665836Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 00:10:18.743522 dockerd[2348]: time="2026-01-23T00:10:18.743198419Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 00:10:18.743522 dockerd[2348]: time="2026-01-23T00:10:18.743345520Z" level=info msg="Initializing buildkit" Jan 23 00:10:18.936452 dockerd[2348]: time="2026-01-23T00:10:18.936398383Z" level=info msg="Completed buildkit initialization" Jan 23 00:10:18.942345 dockerd[2348]: time="2026-01-23T00:10:18.942300690Z" level=info msg="Daemon has completed initialization" Jan 23 00:10:18.942589 dockerd[2348]: time="2026-01-23T00:10:18.942362028Z" level=info msg="API listen on /run/docker.sock" Jan 23 00:10:18.942633 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 00:10:19.586784 containerd[1893]: time="2026-01-23T00:10:19.586737462Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 00:10:20.494846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302532909.mount: Deactivated successfully. Jan 23 00:10:24.234988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 00:10:24.236782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:26.032436 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 23 00:10:26.164401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:10:26.168696 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:10:26.197376 kubelet[2601]: E0123 00:10:26.197292 2601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:10:26.199528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:10:26.199641 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:10:26.200169 systemd[1]: kubelet.service: Consumed 109ms CPU time, 107.3M memory peak. 
Jan 23 00:10:26.692067 containerd[1893]: time="2026-01-23T00:10:26.692006087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:26.695170 containerd[1893]: time="2026-01-23T00:10:26.695137077Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 00:10:26.697240 containerd[1893]: time="2026-01-23T00:10:26.697213731Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:26.701845 containerd[1893]: time="2026-01-23T00:10:26.701244187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:26.701845 containerd[1893]: time="2026-01-23T00:10:26.701700401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 7.11492381s" Jan 23 00:10:26.701845 containerd[1893]: time="2026-01-23T00:10:26.701734954Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 00:10:26.702391 containerd[1893]: time="2026-01-23T00:10:26.702265722Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 00:10:27.388287 update_engine[1882]: I20260123 00:10:27.388123 1882 update_attempter.cc:509] Updating boot flags... 
Jan 23 00:10:28.168048 containerd[1893]: time="2026-01-23T00:10:28.167989682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:28.170838 containerd[1893]: time="2026-01-23T00:10:28.170803230Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 00:10:28.173211 containerd[1893]: time="2026-01-23T00:10:28.173184230Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:28.177512 containerd[1893]: time="2026-01-23T00:10:28.177479198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:28.178117 containerd[1893]: time="2026-01-23T00:10:28.178086744Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.475613896s" Jan 23 00:10:28.178157 containerd[1893]: time="2026-01-23T00:10:28.178121193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 00:10:28.178956 containerd[1893]: time="2026-01-23T00:10:28.178774957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 00:10:29.411304 containerd[1893]: time="2026-01-23T00:10:29.411236601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:29.415160 containerd[1893]: time="2026-01-23T00:10:29.414938683Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 00:10:29.418180 containerd[1893]: time="2026-01-23T00:10:29.418152179Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:29.421819 containerd[1893]: time="2026-01-23T00:10:29.421782354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:29.422701 containerd[1893]: time="2026-01-23T00:10:29.422553935Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.243751162s" Jan 23 00:10:29.422701 containerd[1893]: time="2026-01-23T00:10:29.422582496Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 00:10:29.423101 
containerd[1893]: time="2026-01-23T00:10:29.423079051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 00:10:30.876203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055181704.mount: Deactivated successfully. Jan 23 00:10:31.145364 containerd[1893]: time="2026-01-23T00:10:31.145033263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:31.149023 containerd[1893]: time="2026-01-23T00:10:31.148989314Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 00:10:31.151336 containerd[1893]: time="2026-01-23T00:10:31.151293384Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:31.154695 containerd[1893]: time="2026-01-23T00:10:31.154654966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:31.155054 containerd[1893]: time="2026-01-23T00:10:31.154902183Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.731731849s" Jan 23 00:10:31.155054 containerd[1893]: time="2026-01-23T00:10:31.154933920Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 00:10:31.155369 containerd[1893]: time="2026-01-23T00:10:31.155327519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 00:10:31.761942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309721497.mount: Deactivated successfully. 
Jan 23 00:10:32.708648 containerd[1893]: time="2026-01-23T00:10:32.708589317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:32.710482 containerd[1893]: time="2026-01-23T00:10:32.710293781Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 00:10:32.712627 containerd[1893]: time="2026-01-23T00:10:32.712599739Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:32.716347 containerd[1893]: time="2026-01-23T00:10:32.716317142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:32.717071 containerd[1893]: time="2026-01-23T00:10:32.716842241Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.561487201s" Jan 23 00:10:32.717071 containerd[1893]: time="2026-01-23T00:10:32.716873210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 00:10:32.717450 containerd[1893]: time="2026-01-23T00:10:32.717427175Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 00:10:33.296549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154890025.mount: Deactivated successfully. 
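Each completed pull above is logged by containerd with the repo tag, repo digest, size in bytes, and elapsed time (roughly 7.1 s for kube-apiserver versus about 1.6 s for coredns). A small sketch that extracts those fields from journal lines in this format, which is handy for spotting slow image pulls:

```python
# Sketch: extract image name, size, and pull duration from containerd
# "Pulled image" journal entries like the ones above.
import re
import sys
from typing import Iterable, Iterator, Tuple

PULLED_RE = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?"'
    r'.*?size \\?"(?P<size>\d+)\\?" in (?P<duration>[\d.]+m?s)'
)

def pulled_images(lines: Iterable[str]) -> Iterator[Tuple[str, int, str]]:
    for line in lines:
        match = PULLED_RE.search(line)
        if match:
            yield match.group("image"), int(match.group("size")), match.group("duration")

if __name__ == "__main__":
    for image, size, duration in pulled_images(sys.stdin):
        print(f"{image}: {size / 1e6:.1f} MB in {duration}")
```

Fed with `journalctl --no-pager`, it prints one line per pulled image.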
Jan 23 00:10:33.311281 containerd[1893]: time="2026-01-23T00:10:33.310764621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:10:33.313066 containerd[1893]: time="2026-01-23T00:10:33.313039802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 00:10:33.315351 containerd[1893]: time="2026-01-23T00:10:33.315329551Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:10:33.318225 containerd[1893]: time="2026-01-23T00:10:33.318196866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:10:33.318547 containerd[1893]: time="2026-01-23T00:10:33.318517518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 600.992636ms" Jan 23 00:10:33.318547 containerd[1893]: time="2026-01-23T00:10:33.318547783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 00:10:33.319111 containerd[1893]: time="2026-01-23T00:10:33.319074971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 00:10:33.927121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051204318.mount: Deactivated successfully. Jan 23 00:10:36.187347 containerd[1893]: time="2026-01-23T00:10:36.187294431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:36.189663 containerd[1893]: time="2026-01-23T00:10:36.189630534Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 00:10:36.234778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 00:10:36.236102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:36.392029 containerd[1893]: time="2026-01-23T00:10:36.391969450Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:36.562626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
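The "Scheduled restart job, restart counter is at 5" entry above comes from systemd's Restart= handling of kubelet.service: every failed start bumps the unit's NRestarts counter. A hedged sketch that reads that counter directly from systemd:

```python
# Sketch: read kubelet.service's restart counter (the "restart counter is at N"
# figure above) from systemd via `systemctl show`.
import subprocess

def restart_count(unit: str = "kubelet.service") -> int:
    out = subprocess.run(
        ["systemctl", "show", "--property=NRestarts", "--value", unit],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return int(out)

if __name__ == "__main__":
    print(f"kubelet.service has been restarted {restart_count()} times")
```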
Jan 23 00:10:36.570520 (kubelet)[2844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:10:36.595754 kubelet[2844]: E0123 00:10:36.595694 2844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:10:36.597958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:10:36.598190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:10:36.598787 systemd[1]: kubelet.service: Consumed 108ms CPU time, 105.4M memory peak. Jan 23 00:10:37.043478 containerd[1893]: time="2026-01-23T00:10:37.043278925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:37.044770 containerd[1893]: time="2026-01-23T00:10:37.044737938Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.725627165s" Jan 23 00:10:37.044844 containerd[1893]: time="2026-01-23T00:10:37.044805164Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 00:10:39.157909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:10:39.158020 systemd[1]: kubelet.service: Consumed 108ms CPU time, 105.4M memory peak. Jan 23 00:10:39.160000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:39.181506 systemd[1]: Reload requested from client PID 2874 ('systemctl') (unit session-7.scope)... Jan 23 00:10:39.181516 systemd[1]: Reloading... Jan 23 00:10:39.282285 zram_generator::config[2927]: No configuration found. Jan 23 00:10:39.419095 systemd[1]: Reloading finished in 237 ms. Jan 23 00:10:39.452674 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 00:10:39.452892 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 00:10:39.453365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:10:39.453490 systemd[1]: kubelet.service: Consumed 78ms CPU time, 95M memory peak. Jan 23 00:10:39.455497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:39.713681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:10:39.723542 (kubelet)[2988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:10:39.789578 kubelet[2988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:10:39.789578 kubelet[2988]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 23 00:10:39.789578 kubelet[2988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:10:39.892817 kubelet[2988]: I0123 00:10:39.789609 2988 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:10:40.101156 kubelet[2988]: I0123 00:10:40.101004 2988 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 00:10:40.101156 kubelet[2988]: I0123 00:10:40.101037 2988 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:10:40.101703 kubelet[2988]: I0123 00:10:40.101678 2988 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 00:10:40.121710 kubelet[2988]: E0123 00:10:40.121657 2988 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:10:40.123196 kubelet[2988]: I0123 00:10:40.123162 2988 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:10:40.128111 kubelet[2988]: I0123 00:10:40.128091 2988 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:10:40.130922 kubelet[2988]: I0123 00:10:40.130893 2988 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 00:10:40.131285 kubelet[2988]: I0123 00:10:40.131237 2988 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:10:40.131491 kubelet[2988]: I0123 00:10:40.131352 2988 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-aedec2d11e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:10:40.131625 kubelet[2988]: I0123 00:10:40.131611 2988 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:10:40.131672 kubelet[2988]: I0123 00:10:40.131664 2988 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 00:10:40.131889 kubelet[2988]: I0123 00:10:40.131872 2988 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:10:40.135005 kubelet[2988]: I0123 00:10:40.134735 2988 kubelet.go:446] "Attempting to sync node with API server" Jan 23 00:10:40.135005 kubelet[2988]: I0123 00:10:40.134765 2988 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:10:40.135005 kubelet[2988]: I0123 00:10:40.134787 2988 kubelet.go:352] "Adding apiserver pod source" Jan 23 00:10:40.135005 kubelet[2988]: I0123 00:10:40.134797 2988 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:10:40.139044 kubelet[2988]: W0123 00:10:40.138427 2988 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-aedec2d11e&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 23 00:10:40.139044 kubelet[2988]: E0123 00:10:40.138483 2988 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-aedec2d11e&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:10:40.139044 
kubelet[2988]: W0123 00:10:40.138837 2988 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 23 00:10:40.139044 kubelet[2988]: E0123 00:10:40.138866 2988 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:10:40.139424 kubelet[2988]: I0123 00:10:40.139405 2988 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:10:40.139758 kubelet[2988]: I0123 00:10:40.139740 2988 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 00:10:40.139803 kubelet[2988]: W0123 00:10:40.139794 2988 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 00:10:40.141103 kubelet[2988]: I0123 00:10:40.141069 2988 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 00:10:40.141103 kubelet[2988]: I0123 00:10:40.141104 2988 server.go:1287] "Started kubelet" Jan 23 00:10:40.146239 kubelet[2988]: I0123 00:10:40.145960 2988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:10:40.148215 kubelet[2988]: I0123 00:10:40.148179 2988 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:10:40.148633 kubelet[2988]: E0123 00:10:40.148531 2988 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-aedec2d11e.188d33ab7625d028 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-aedec2d11e,UID:ci-4459.2.2-n-aedec2d11e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-aedec2d11e,},FirstTimestamp:2026-01-23 00:10:40.14108676 +0000 UTC m=+0.415055264,LastTimestamp:2026-01-23 00:10:40.14108676 +0000 UTC m=+0.415055264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-aedec2d11e,}" Jan 23 00:10:40.148855 kubelet[2988]: I0123 00:10:40.148831 2988 server.go:479] "Adding debug handlers to kubelet server" Jan 23 00:10:40.150610 kubelet[2988]: I0123 00:10:40.149349 2988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:10:40.150610 kubelet[2988]: I0123 00:10:40.149463 2988 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 00:10:40.150610 kubelet[2988]: I0123 00:10:40.149573 2988 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:10:40.150610 kubelet[2988]: E0123 00:10:40.149658 2988 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:40.150610 kubelet[2988]: I0123 00:10:40.149730 2988 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:10:40.151543 kubelet[2988]: E0123 00:10:40.151515 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-aedec2d11e?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Jan 23 00:10:40.151662 kubelet[2988]: I0123 00:10:40.151650 2988 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 00:10:40.152602 kubelet[2988]: I0123 00:10:40.152584 2988 factory.go:221] Registration of the systemd container factory successfully Jan 23 00:10:40.152749 kubelet[2988]: I0123 00:10:40.152734 2988 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:10:40.153395 kubelet[2988]: I0123 00:10:40.153368 2988 reconciler.go:26] "Reconciler: start to sync state" Jan 23 00:10:40.154407 kubelet[2988]: W0123 00:10:40.154379 2988 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 23 00:10:40.154543 kubelet[2988]: E0123 00:10:40.154525 2988 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:10:40.154871 kubelet[2988]: I0123 00:10:40.154856 2988 factory.go:221] Registration of the containerd container factory successfully Jan 23 00:10:40.177995 kubelet[2988]: E0123 00:10:40.177964 2988 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:10:40.181376 kubelet[2988]: I0123 00:10:40.181355 2988 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:10:40.181376 kubelet[2988]: I0123 00:10:40.181370 2988 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:10:40.181376 kubelet[2988]: I0123 00:10:40.181389 2988 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:10:40.249909 kubelet[2988]: E0123 00:10:40.249857 2988 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:40.250608 kubelet[2988]: I0123 00:10:40.250591 2988 policy_none.go:49] "None policy: Start" Jan 23 00:10:40.250655 kubelet[2988]: I0123 00:10:40.250611 2988 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 00:10:40.250655 kubelet[2988]: I0123 00:10:40.250622 2988 state_mem.go:35] "Initializing new in-memory state store" Jan 23 00:10:40.258565 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 00:10:40.269516 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 00:10:40.270976 kubelet[2988]: I0123 00:10:40.270937 2988 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 23 00:10:40.272966 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 00:10:40.275645 kubelet[2988]: I0123 00:10:40.275620 2988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 00:10:40.276232 kubelet[2988]: I0123 00:10:40.275760 2988 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 00:10:40.276232 kubelet[2988]: I0123 00:10:40.275785 2988 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:10:40.276232 kubelet[2988]: I0123 00:10:40.275790 2988 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 00:10:40.276232 kubelet[2988]: E0123 00:10:40.275834 2988 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:10:40.276507 kubelet[2988]: W0123 00:10:40.276490 2988 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 23 00:10:40.276611 kubelet[2988]: E0123 00:10:40.276594 2988 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:10:40.278123 kubelet[2988]: I0123 00:10:40.278101 2988 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 00:10:40.278430 kubelet[2988]: I0123 00:10:40.278411 2988 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:10:40.278995 kubelet[2988]: I0123 00:10:40.278923 2988 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:10:40.279777 kubelet[2988]: I0123 00:10:40.279681 2988 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:10:40.281545 kubelet[2988]: E0123 00:10:40.281495 2988 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 00:10:40.281545 kubelet[2988]: E0123 00:10:40.281534 2988 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:40.353181 kubelet[2988]: E0123 00:10:40.353027 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-aedec2d11e?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Jan 23 00:10:40.381135 kubelet[2988]: I0123 00:10:40.380598 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.381431 kubelet[2988]: E0123 00:10:40.381397 2988 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.385012 systemd[1]: Created slice kubepods-burstable-podf75aa792e225687f47e6bf3c08ae69ce.slice - libcontainer container kubepods-burstable-podf75aa792e225687f47e6bf3c08ae69ce.slice. Jan 23 00:10:40.392932 kubelet[2988]: E0123 00:10:40.392905 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.396485 systemd[1]: Created slice kubepods-burstable-pod78b287779590c9b3f4b3c31970c766ba.slice - libcontainer container kubepods-burstable-pod78b287779590c9b3f4b3c31970c766ba.slice. Jan 23 00:10:40.408418 kubelet[2988]: E0123 00:10:40.408361 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.410584 systemd[1]: Created slice kubepods-burstable-pod65e550ae7a10a15543605e30c3a76ab6.slice - libcontainer container kubepods-burstable-pod65e550ae7a10a15543605e30c3a76ab6.slice. 
Jan 23 00:10:40.414274 kubelet[2988]: E0123 00:10:40.413488 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.455482 kubelet[2988]: I0123 00:10:40.455445 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f75aa792e225687f47e6bf3c08ae69ce-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-aedec2d11e\" (UID: \"f75aa792e225687f47e6bf3c08ae69ce\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.455732 kubelet[2988]: I0123 00:10:40.455717 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78b287779590c9b3f4b3c31970c766ba-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" (UID: \"78b287779590c9b3f4b3c31970c766ba\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.455857 kubelet[2988]: I0123 00:10:40.455843 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78b287779590c9b3f4b3c31970c766ba-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" (UID: \"78b287779590c9b3f4b3c31970c766ba\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.455946 kubelet[2988]: I0123 00:10:40.455934 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78b287779590c9b3f4b3c31970c766ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" (UID: \"78b287779590c9b3f4b3c31970c766ba\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.456040 kubelet[2988]: I0123 00:10:40.456030 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.456131 kubelet[2988]: I0123 00:10:40.456122 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.456227 kubelet[2988]: I0123 00:10:40.456216 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.456346 kubelet[2988]: I0123 00:10:40.456335 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: 
\"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.456444 kubelet[2988]: I0123 00:10:40.456433 2988 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.583691 kubelet[2988]: I0123 00:10:40.583616 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.584042 kubelet[2988]: E0123 00:10:40.584013 2988 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.695022 containerd[1893]: time="2026-01-23T00:10:40.694770485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-aedec2d11e,Uid:f75aa792e225687f47e6bf3c08ae69ce,Namespace:kube-system,Attempt:0,}" Jan 23 00:10:40.710470 containerd[1893]: time="2026-01-23T00:10:40.710426797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-aedec2d11e,Uid:78b287779590c9b3f4b3c31970c766ba,Namespace:kube-system,Attempt:0,}" Jan 23 00:10:40.716113 containerd[1893]: time="2026-01-23T00:10:40.716071005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-aedec2d11e,Uid:65e550ae7a10a15543605e30c3a76ab6,Namespace:kube-system,Attempt:0,}" Jan 23 00:10:40.740895 containerd[1893]: time="2026-01-23T00:10:40.740847434Z" level=info msg="connecting to shim 65ee450c4b65939861d52fb2a2eb85a087c95c3b1b336b14d69241f245c7a6bb" address="unix:///run/containerd/s/9364870b90e5fc0a21b37df78ee0998579fe87285b192eaa2004853d413b44cd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:10:40.753933 kubelet[2988]: E0123 00:10:40.753896 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-aedec2d11e?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Jan 23 00:10:40.766400 systemd[1]: Started cri-containerd-65ee450c4b65939861d52fb2a2eb85a087c95c3b1b336b14d69241f245c7a6bb.scope - libcontainer container 65ee450c4b65939861d52fb2a2eb85a087c95c3b1b336b14d69241f245c7a6bb. Jan 23 00:10:40.770158 containerd[1893]: time="2026-01-23T00:10:40.770113372Z" level=info msg="connecting to shim bea0e9eb0f066fbde141a960593e6664d6302378e8e6e267e6bc6d1b0d3c56b0" address="unix:///run/containerd/s/67f81b008dbda08f3982ae3f176dc3c98eec32d6f9181c9f5bade9e6b260c4d9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:10:40.794232 containerd[1893]: time="2026-01-23T00:10:40.794191387Z" level=info msg="connecting to shim 1910a30d7bfc2ae3e92d9465d49643d870d3dba9db023e43a20735a871a5469b" address="unix:///run/containerd/s/8a8c5542ffd9ac50a3e85ded82bbc95cf8cd9e78b5eca0cec0f627a6980d2174" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:10:40.797413 systemd[1]: Started cri-containerd-bea0e9eb0f066fbde141a960593e6664d6302378e8e6e267e6bc6d1b0d3c56b0.scope - libcontainer container bea0e9eb0f066fbde141a960593e6664d6302378e8e6e267e6bc6d1b0d3c56b0. 
Jan 23 00:10:40.825083 systemd[1]: Started cri-containerd-1910a30d7bfc2ae3e92d9465d49643d870d3dba9db023e43a20735a871a5469b.scope - libcontainer container 1910a30d7bfc2ae3e92d9465d49643d870d3dba9db023e43a20735a871a5469b. Jan 23 00:10:40.840277 containerd[1893]: time="2026-01-23T00:10:40.839930758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-aedec2d11e,Uid:f75aa792e225687f47e6bf3c08ae69ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"65ee450c4b65939861d52fb2a2eb85a087c95c3b1b336b14d69241f245c7a6bb\"" Jan 23 00:10:40.846192 containerd[1893]: time="2026-01-23T00:10:40.846156712Z" level=info msg="CreateContainer within sandbox \"65ee450c4b65939861d52fb2a2eb85a087c95c3b1b336b14d69241f245c7a6bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 00:10:40.848590 containerd[1893]: time="2026-01-23T00:10:40.848552507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-aedec2d11e,Uid:78b287779590c9b3f4b3c31970c766ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"bea0e9eb0f066fbde141a960593e6664d6302378e8e6e267e6bc6d1b0d3c56b0\"" Jan 23 00:10:40.856909 containerd[1893]: time="2026-01-23T00:10:40.856833990Z" level=info msg="CreateContainer within sandbox \"bea0e9eb0f066fbde141a960593e6664d6302378e8e6e267e6bc6d1b0d3c56b0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 00:10:40.869166 containerd[1893]: time="2026-01-23T00:10:40.869109421Z" level=info msg="Container e6ac618652628f78201a8b41183bbb3e4d35045ddee843b0963640f417e33e68: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:10:40.881364 containerd[1893]: time="2026-01-23T00:10:40.881316425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-aedec2d11e,Uid:65e550ae7a10a15543605e30c3a76ab6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1910a30d7bfc2ae3e92d9465d49643d870d3dba9db023e43a20735a871a5469b\"" Jan 23 00:10:40.885987 containerd[1893]: time="2026-01-23T00:10:40.885950866Z" level=info msg="CreateContainer within sandbox \"1910a30d7bfc2ae3e92d9465d49643d870d3dba9db023e43a20735a871a5469b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 00:10:40.888218 containerd[1893]: time="2026-01-23T00:10:40.887823797Z" level=info msg="Container ee4e325284341e7fee9448cec1eca3a5f68cf9380c4bf5f43b98e2d010394545: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:10:40.888218 containerd[1893]: time="2026-01-23T00:10:40.887852445Z" level=info msg="CreateContainer within sandbox \"65ee450c4b65939861d52fb2a2eb85a087c95c3b1b336b14d69241f245c7a6bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6ac618652628f78201a8b41183bbb3e4d35045ddee843b0963640f417e33e68\"" Jan 23 00:10:40.888795 containerd[1893]: time="2026-01-23T00:10:40.888698768Z" level=info msg="StartContainer for \"e6ac618652628f78201a8b41183bbb3e4d35045ddee843b0963640f417e33e68\"" Jan 23 00:10:40.889809 containerd[1893]: time="2026-01-23T00:10:40.889783114Z" level=info msg="connecting to shim e6ac618652628f78201a8b41183bbb3e4d35045ddee843b0963640f417e33e68" address="unix:///run/containerd/s/9364870b90e5fc0a21b37df78ee0998579fe87285b192eaa2004853d413b44cd" protocol=ttrpc version=3 Jan 23 00:10:40.907407 systemd[1]: Started cri-containerd-e6ac618652628f78201a8b41183bbb3e4d35045ddee843b0963640f417e33e68.scope - libcontainer container e6ac618652628f78201a8b41183bbb3e4d35045ddee843b0963640f417e33e68. 
Jan 23 00:10:40.921633 containerd[1893]: time="2026-01-23T00:10:40.921501663Z" level=info msg="CreateContainer within sandbox \"bea0e9eb0f066fbde141a960593e6664d6302378e8e6e267e6bc6d1b0d3c56b0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee4e325284341e7fee9448cec1eca3a5f68cf9380c4bf5f43b98e2d010394545\"" Jan 23 00:10:40.922359 containerd[1893]: time="2026-01-23T00:10:40.922188469Z" level=info msg="StartContainer for \"ee4e325284341e7fee9448cec1eca3a5f68cf9380c4bf5f43b98e2d010394545\"" Jan 23 00:10:40.925088 containerd[1893]: time="2026-01-23T00:10:40.925054014Z" level=info msg="connecting to shim ee4e325284341e7fee9448cec1eca3a5f68cf9380c4bf5f43b98e2d010394545" address="unix:///run/containerd/s/67f81b008dbda08f3982ae3f176dc3c98eec32d6f9181c9f5bade9e6b260c4d9" protocol=ttrpc version=3 Jan 23 00:10:40.928429 containerd[1893]: time="2026-01-23T00:10:40.928400119Z" level=info msg="Container a5754647ba0a64d28c8d2abebef1ac5e0076502b49978df170d8bbca302522f3: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:10:40.943615 systemd[1]: Started cri-containerd-ee4e325284341e7fee9448cec1eca3a5f68cf9380c4bf5f43b98e2d010394545.scope - libcontainer container ee4e325284341e7fee9448cec1eca3a5f68cf9380c4bf5f43b98e2d010394545. Jan 23 00:10:40.953358 containerd[1893]: time="2026-01-23T00:10:40.952237278Z" level=info msg="CreateContainer within sandbox \"1910a30d7bfc2ae3e92d9465d49643d870d3dba9db023e43a20735a871a5469b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a5754647ba0a64d28c8d2abebef1ac5e0076502b49978df170d8bbca302522f3\"" Jan 23 00:10:40.953733 containerd[1893]: time="2026-01-23T00:10:40.953697124Z" level=info msg="StartContainer for \"a5754647ba0a64d28c8d2abebef1ac5e0076502b49978df170d8bbca302522f3\"" Jan 23 00:10:40.956055 containerd[1893]: time="2026-01-23T00:10:40.956029077Z" level=info msg="connecting to shim a5754647ba0a64d28c8d2abebef1ac5e0076502b49978df170d8bbca302522f3" address="unix:///run/containerd/s/8a8c5542ffd9ac50a3e85ded82bbc95cf8cd9e78b5eca0cec0f627a6980d2174" protocol=ttrpc version=3 Jan 23 00:10:40.957178 containerd[1893]: time="2026-01-23T00:10:40.957140199Z" level=info msg="StartContainer for \"e6ac618652628f78201a8b41183bbb3e4d35045ddee843b0963640f417e33e68\" returns successfully" Jan 23 00:10:40.961991 kubelet[2988]: W0123 00:10:40.961821 2988 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-aedec2d11e&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Jan 23 00:10:40.961991 kubelet[2988]: E0123 00:10:40.961890 2988 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-aedec2d11e&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:10:40.982581 systemd[1]: Started cri-containerd-a5754647ba0a64d28c8d2abebef1ac5e0076502b49978df170d8bbca302522f3.scope - libcontainer container a5754647ba0a64d28c8d2abebef1ac5e0076502b49978df170d8bbca302522f3. 
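The containerd entries above trace the CRI lifecycle for each static pod: RunPodSandbox returns a sandbox id, CreateContainer creates the control-plane container inside that sandbox, and StartContainer launches it. The same three calls can be exercised by hand with crictl; in this sketch `pod.json` and `container.json` are hypothetical spec files, not anything present on this host, and crictl is assumed to be configured for the local containerd socket.

```python
# Sketch of the CRI lifecycle visible above (RunPodSandbox -> CreateContainer
# -> StartContainer), driven through crictl. `pod.json` and `container.json`
# are hypothetical spec files, not taken from this host.
import subprocess

def run(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout.strip()

def start_pod(pod_spec: str = "pod.json", container_spec: str = "container.json") -> str:
    sandbox_id = run("crictl", "runp", pod_spec)                                   # RunPodSandbox
    container_id = run("crictl", "create", sandbox_id, container_spec, pod_spec)   # CreateContainer
    run("crictl", "start", container_id)                                           # StartContainer
    return container_id

if __name__ == "__main__":
    print(start_pod())
```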
Jan 23 00:10:40.987960 kubelet[2988]: I0123 00:10:40.987691 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:40.990203 kubelet[2988]: E0123 00:10:40.990167 2988 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:41.008790 containerd[1893]: time="2026-01-23T00:10:41.008731657Z" level=info msg="StartContainer for \"ee4e325284341e7fee9448cec1eca3a5f68cf9380c4bf5f43b98e2d010394545\" returns successfully" Jan 23 00:10:41.047524 containerd[1893]: time="2026-01-23T00:10:41.047461698Z" level=info msg="StartContainer for \"a5754647ba0a64d28c8d2abebef1ac5e0076502b49978df170d8bbca302522f3\" returns successfully" Jan 23 00:10:41.285753 kubelet[2988]: E0123 00:10:41.285724 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:41.291295 kubelet[2988]: E0123 00:10:41.291109 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:41.292861 kubelet[2988]: E0123 00:10:41.292756 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:41.793067 kubelet[2988]: I0123 00:10:41.792740 2988 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.259883 kubelet[2988]: E0123 00:10:42.259852 2988 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.293802 kubelet[2988]: E0123 00:10:42.293729 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.295002 kubelet[2988]: E0123 00:10:42.294803 2988 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.342522 kubelet[2988]: I0123 00:10:42.342463 2988 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.343432 kubelet[2988]: E0123 00:10:42.343375 2988 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-n-aedec2d11e\": node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:42.463830 kubelet[2988]: E0123 00:10:42.463794 2988 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:42.564926 kubelet[2988]: E0123 00:10:42.564498 2988 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:42.665574 kubelet[2988]: E0123 00:10:42.665527 2988 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:42.850601 kubelet[2988]: I0123 00:10:42.850314 2988 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.946010 kubelet[2988]: E0123 00:10:42.945931 2988 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-aedec2d11e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.946010 kubelet[2988]: I0123 00:10:42.945965 2988 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.949931 kubelet[2988]: E0123 00:10:42.949891 2988 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.950218 kubelet[2988]: I0123 00:10:42.950085 2988 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:42.953646 kubelet[2988]: E0123 00:10:42.953616 2988 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:43.142269 kubelet[2988]: I0123 00:10:43.141645 2988 apiserver.go:52] "Watching apiserver" Jan 23 00:10:43.152674 kubelet[2988]: I0123 00:10:43.152628 2988 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 00:10:43.928983 kubelet[2988]: I0123 00:10:43.928946 2988 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:43.955155 kubelet[2988]: W0123 00:10:43.954868 2988 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 00:10:44.734792 kubelet[2988]: I0123 00:10:44.734760 2988 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:44.742977 kubelet[2988]: W0123 00:10:44.742936 2988 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 00:10:45.215319 systemd[1]: Reload requested from client PID 3257 ('systemctl') (unit session-7.scope)... Jan 23 00:10:45.215334 systemd[1]: Reloading... Jan 23 00:10:45.289320 zram_generator::config[3301]: No configuration found. Jan 23 00:10:45.462519 systemd[1]: Reloading finished in 246 ms. Jan 23 00:10:45.484275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:45.499077 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 00:10:45.499342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:10:45.499405 systemd[1]: kubelet.service: Consumed 663ms CPU time, 125.6M memory peak. Jan 23 00:10:45.501746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:10:45.655790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:10:45.665735 (kubelet)[3369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:10:45.766209 kubelet[3369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:10:45.766209 kubelet[3369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:10:45.766209 kubelet[3369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:10:45.767294 kubelet[3369]: I0123 00:10:45.766629 3369 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:10:45.773596 kubelet[3369]: I0123 00:10:45.773568 3369 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 00:10:45.773714 kubelet[3369]: I0123 00:10:45.773704 3369 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:10:45.773939 kubelet[3369]: I0123 00:10:45.773925 3369 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 00:10:45.774921 kubelet[3369]: I0123 00:10:45.774900 3369 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 00:10:45.776921 kubelet[3369]: I0123 00:10:45.776900 3369 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:10:45.780640 kubelet[3369]: I0123 00:10:45.780625 3369 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:10:45.783205 kubelet[3369]: I0123 00:10:45.783171 3369 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 00:10:45.783731 kubelet[3369]: I0123 00:10:45.783696 3369 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:10:45.783858 kubelet[3369]: I0123 00:10:45.783731 3369 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-aedec2d11e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:10:45.783926 kubelet[3369]: I0123 00:10:45.783865 3369 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:10:45.783926 kubelet[3369]: I0123 00:10:45.783873 3369 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 00:10:45.783926 kubelet[3369]: I0123 00:10:45.783910 3369 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:10:45.784219 kubelet[3369]: I0123 00:10:45.784029 3369 kubelet.go:446] "Attempting to sync node with API server" Jan 23 00:10:45.784219 kubelet[3369]: I0123 00:10:45.784040 3369 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:10:45.784219 kubelet[3369]: I0123 00:10:45.784057 3369 kubelet.go:352] "Adding apiserver pod source" Jan 23 00:10:45.784219 kubelet[3369]: I0123 00:10:45.784068 3369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:10:45.789571 kubelet[3369]: I0123 00:10:45.789458 3369 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:10:45.789862 kubelet[3369]: I0123 00:10:45.789824 3369 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 00:10:45.790383 kubelet[3369]: I0123 00:10:45.790214 3369 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 00:10:45.790383 kubelet[3369]: I0123 00:10:45.790241 3369 server.go:1287] "Started kubelet" Jan 23 00:10:45.795309 kubelet[3369]: I0123 00:10:45.795288 3369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:10:45.799738 kubelet[3369]: I0123 00:10:45.799706 3369 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:10:45.801274 kubelet[3369]: I0123 00:10:45.800553 3369 server.go:479] "Adding debug handlers to kubelet server" Jan 23 00:10:45.801274 kubelet[3369]: I0123 00:10:45.801189 3369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:10:45.801500 kubelet[3369]: I0123 00:10:45.801487 3369 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:10:45.801776 kubelet[3369]: I0123 00:10:45.801759 3369 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:10:45.802667 kubelet[3369]: I0123 00:10:45.802653 3369 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 00:10:45.802898 kubelet[3369]: E0123 00:10:45.802880 3369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-aedec2d11e\" not found" Jan 23 00:10:45.804176 kubelet[3369]: I0123 00:10:45.804149 3369 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 00:10:45.804369 kubelet[3369]: I0123 00:10:45.804356 3369 reconciler.go:26] "Reconciler: start to sync state" Jan 23 00:10:45.805781 kubelet[3369]: I0123 00:10:45.805750 3369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 00:10:45.806675 kubelet[3369]: I0123 00:10:45.806657 3369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 00:10:45.806764 kubelet[3369]: I0123 00:10:45.806755 3369 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 00:10:45.806816 kubelet[3369]: I0123 00:10:45.806809 3369 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:10:45.806854 kubelet[3369]: I0123 00:10:45.806848 3369 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 00:10:45.806941 kubelet[3369]: E0123 00:10:45.806925 3369 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:10:45.812789 kubelet[3369]: I0123 00:10:45.812769 3369 factory.go:221] Registration of the containerd container factory successfully Jan 23 00:10:45.812882 kubelet[3369]: I0123 00:10:45.812873 3369 factory.go:221] Registration of the systemd container factory successfully Jan 23 00:10:45.813012 kubelet[3369]: I0123 00:10:45.812994 3369 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:10:45.815949 kubelet[3369]: E0123 00:10:45.815907 3369 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:10:45.863528 kubelet[3369]: I0123 00:10:45.863503 3369 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:10:45.863687 kubelet[3369]: I0123 00:10:45.863675 3369 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:10:45.863743 kubelet[3369]: I0123 00:10:45.863736 3369 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:10:45.863937 kubelet[3369]: I0123 00:10:45.863922 3369 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 00:10:45.864010 kubelet[3369]: I0123 00:10:45.863990 3369 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 00:10:45.864051 kubelet[3369]: I0123 00:10:45.864044 3369 policy_none.go:49] "None policy: Start" Jan 23 00:10:45.864092 kubelet[3369]: I0123 00:10:45.864085 3369 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 00:10:45.864157 kubelet[3369]: I0123 00:10:45.864150 3369 state_mem.go:35] "Initializing new in-memory state store" Jan 23 00:10:45.864346 kubelet[3369]: I0123 00:10:45.864330 3369 state_mem.go:75] "Updated machine memory state" Jan 23 00:10:45.867794 kubelet[3369]: I0123 00:10:45.867769 3369 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 00:10:45.868498 kubelet[3369]: I0123 00:10:45.868485 3369 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:10:45.868817 kubelet[3369]: I0123 00:10:45.868785 3369 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:10:45.869227 kubelet[3369]: I0123 00:10:45.869213 3369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:10:45.872475 kubelet[3369]: E0123 00:10:45.871416 3369 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 00:10:45.908025 kubelet[3369]: I0123 00:10:45.907982 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:45.908615 kubelet[3369]: I0123 00:10:45.908302 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:45.908615 kubelet[3369]: I0123 00:10:45.908384 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:45.916870 kubelet[3369]: W0123 00:10:45.916818 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 00:10:45.923660 kubelet[3369]: W0123 00:10:45.923444 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 00:10:45.923660 kubelet[3369]: W0123 00:10:45.923476 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 00:10:45.923660 kubelet[3369]: E0123 00:10:45.923532 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:45.923660 kubelet[3369]: E0123 00:10:45.923594 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-aedec2d11e\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:45.972525 kubelet[3369]: I0123 00:10:45.972231 3369 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:45.987177 kubelet[3369]: I0123 00:10:45.987039 3369 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:45.987442 kubelet[3369]: I0123 00:10:45.987355 3369 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105397 kubelet[3369]: I0123 00:10:46.105285 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105751 kubelet[3369]: I0123 00:10:46.105545 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105751 kubelet[3369]: I0123 00:10:46.105571 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78b287779590c9b3f4b3c31970c766ba-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" (UID: \"78b287779590c9b3f4b3c31970c766ba\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 
00:10:46.105751 kubelet[3369]: I0123 00:10:46.105584 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78b287779590c9b3f4b3c31970c766ba-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" (UID: \"78b287779590c9b3f4b3c31970c766ba\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105751 kubelet[3369]: I0123 00:10:46.105630 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105751 kubelet[3369]: I0123 00:10:46.105646 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f75aa792e225687f47e6bf3c08ae69ce-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-aedec2d11e\" (UID: \"f75aa792e225687f47e6bf3c08ae69ce\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105870 kubelet[3369]: I0123 00:10:46.105691 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78b287779590c9b3f4b3c31970c766ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" (UID: \"78b287779590c9b3f4b3c31970c766ba\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105870 kubelet[3369]: I0123 00:10:46.105703 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.105870 kubelet[3369]: I0123 00:10:46.105715 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65e550ae7a10a15543605e30c3a76ab6-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-aedec2d11e\" (UID: \"65e550ae7a10a15543605e30c3a76ab6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.786229 kubelet[3369]: I0123 00:10:46.786187 3369 apiserver.go:52] "Watching apiserver" Jan 23 00:10:46.805189 kubelet[3369]: I0123 00:10:46.805146 3369 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 00:10:46.848879 kubelet[3369]: I0123 00:10:46.848483 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.849948 kubelet[3369]: I0123 00:10:46.849850 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.865848 kubelet[3369]: W0123 00:10:46.865815 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 00:10:46.865979 kubelet[3369]: E0123 00:10:46.865875 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4459.2.2-n-aedec2d11e\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.870805 kubelet[3369]: W0123 00:10:46.870769 3369 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 00:10:46.871025 kubelet[3369]: E0123 00:10:46.870894 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-aedec2d11e\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" Jan 23 00:10:46.891885 kubelet[3369]: I0123 00:10:46.891689 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-aedec2d11e" podStartSLOduration=3.891654014 podStartE2EDuration="3.891654014s" podCreationTimestamp="2026-01-23 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:10:46.873939903 +0000 UTC m=+1.205180517" watchObservedRunningTime="2026-01-23 00:10:46.891654014 +0000 UTC m=+1.222894628" Jan 23 00:10:46.905276 kubelet[3369]: I0123 00:10:46.905207 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-aedec2d11e" podStartSLOduration=1.9051881640000001 podStartE2EDuration="1.905188164s" podCreationTimestamp="2026-01-23 00:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:10:46.892889303 +0000 UTC m=+1.224129917" watchObservedRunningTime="2026-01-23 00:10:46.905188164 +0000 UTC m=+1.236428786" Jan 23 00:10:46.920615 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 23 00:10:46.924278 kubelet[3369]: I0123 00:10:46.922382 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-aedec2d11e" podStartSLOduration=2.922364498 podStartE2EDuration="2.922364498s" podCreationTimestamp="2026-01-23 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:10:46.905810984 +0000 UTC m=+1.237051606" watchObservedRunningTime="2026-01-23 00:10:46.922364498 +0000 UTC m=+1.253605112" Jan 23 00:10:46.997632 sshd[2329]: Connection closed by 10.200.16.10 port 44798 Jan 23 00:10:46.998245 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:47.002737 systemd-logind[1875]: Session 7 logged out. Waiting for processes to exit. Jan 23 00:10:47.003360 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:44798.service: Deactivated successfully. Jan 23 00:10:47.006822 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 00:10:47.007025 systemd[1]: session-7.scope: Consumed 2.376s CPU time, 218.3M memory peak. Jan 23 00:10:47.009119 systemd-logind[1875]: Removed session 7. Jan 23 00:10:50.113081 kubelet[3369]: I0123 00:10:50.113039 3369 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 00:10:50.113890 containerd[1893]: time="2026-01-23T00:10:50.113734235Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 00:10:50.114534 kubelet[3369]: I0123 00:10:50.114322 3369 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 00:10:50.739827 systemd[1]: Created slice kubepods-besteffort-podaaa335b2_a6b7_4e85_97de_7547e1c73ee9.slice - libcontainer container kubepods-besteffort-podaaa335b2_a6b7_4e85_97de_7547e1c73ee9.slice. Jan 23 00:10:50.762113 systemd[1]: Created slice kubepods-burstable-pode5aa0608_87e0_4889_977a_72f9629e4821.slice - libcontainer container kubepods-burstable-pode5aa0608_87e0_4889_977a_72f9629e4821.slice. Jan 23 00:10:50.834757 kubelet[3369]: I0123 00:10:50.834708 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aaa335b2-a6b7-4e85-97de-7547e1c73ee9-kube-proxy\") pod \"kube-proxy-clcnl\" (UID: \"aaa335b2-a6b7-4e85-97de-7547e1c73ee9\") " pod="kube-system/kube-proxy-clcnl" Jan 23 00:10:50.834757 kubelet[3369]: I0123 00:10:50.834753 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvbs\" (UniqueName: \"kubernetes.io/projected/aaa335b2-a6b7-4e85-97de-7547e1c73ee9-kube-api-access-kmvbs\") pod \"kube-proxy-clcnl\" (UID: \"aaa335b2-a6b7-4e85-97de-7547e1c73ee9\") " pod="kube-system/kube-proxy-clcnl" Jan 23 00:10:50.834757 kubelet[3369]: I0123 00:10:50.834772 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/e5aa0608-87e0-4889-977a-72f9629e4821-cni-plugin\") pod \"kube-flannel-ds-62zzp\" (UID: \"e5aa0608-87e0-4889-977a-72f9629e4821\") " pod="kube-flannel/kube-flannel-ds-62zzp" Jan 23 00:10:50.834956 kubelet[3369]: I0123 00:10:50.834784 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5aa0608-87e0-4889-977a-72f9629e4821-xtables-lock\") pod \"kube-flannel-ds-62zzp\" (UID: \"e5aa0608-87e0-4889-977a-72f9629e4821\") " pod="kube-flannel/kube-flannel-ds-62zzp" Jan 23 00:10:50.834956 kubelet[3369]: I0123 00:10:50.834794 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/e5aa0608-87e0-4889-977a-72f9629e4821-cni\") pod \"kube-flannel-ds-62zzp\" (UID: \"e5aa0608-87e0-4889-977a-72f9629e4821\") " pod="kube-flannel/kube-flannel-ds-62zzp" Jan 23 00:10:50.834956 kubelet[3369]: I0123 00:10:50.834804 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzfb\" (UniqueName: \"kubernetes.io/projected/e5aa0608-87e0-4889-977a-72f9629e4821-kube-api-access-gfzfb\") pod \"kube-flannel-ds-62zzp\" (UID: \"e5aa0608-87e0-4889-977a-72f9629e4821\") " pod="kube-flannel/kube-flannel-ds-62zzp" Jan 23 00:10:50.834956 kubelet[3369]: I0123 00:10:50.834812 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaa335b2-a6b7-4e85-97de-7547e1c73ee9-xtables-lock\") pod \"kube-proxy-clcnl\" (UID: \"aaa335b2-a6b7-4e85-97de-7547e1c73ee9\") " pod="kube-system/kube-proxy-clcnl" Jan 23 00:10:50.834956 kubelet[3369]: I0123 00:10:50.834824 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e5aa0608-87e0-4889-977a-72f9629e4821-run\") pod \"kube-flannel-ds-62zzp\" 
(UID: \"e5aa0608-87e0-4889-977a-72f9629e4821\") " pod="kube-flannel/kube-flannel-ds-62zzp" Jan 23 00:10:50.835045 kubelet[3369]: I0123 00:10:50.834834 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/e5aa0608-87e0-4889-977a-72f9629e4821-flannel-cfg\") pod \"kube-flannel-ds-62zzp\" (UID: \"e5aa0608-87e0-4889-977a-72f9629e4821\") " pod="kube-flannel/kube-flannel-ds-62zzp" Jan 23 00:10:50.835045 kubelet[3369]: I0123 00:10:50.834843 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaa335b2-a6b7-4e85-97de-7547e1c73ee9-lib-modules\") pod \"kube-proxy-clcnl\" (UID: \"aaa335b2-a6b7-4e85-97de-7547e1c73ee9\") " pod="kube-system/kube-proxy-clcnl" Jan 23 00:10:50.947290 kubelet[3369]: E0123 00:10:50.947222 3369 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 00:10:50.947822 kubelet[3369]: E0123 00:10:50.947421 3369 projected.go:194] Error preparing data for projected volume kube-api-access-gfzfb for pod kube-flannel/kube-flannel-ds-62zzp: configmap "kube-root-ca.crt" not found Jan 23 00:10:50.947822 kubelet[3369]: E0123 00:10:50.947633 3369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5aa0608-87e0-4889-977a-72f9629e4821-kube-api-access-gfzfb podName:e5aa0608-87e0-4889-977a-72f9629e4821 nodeName:}" failed. No retries permitted until 2026-01-23 00:10:51.447462085 +0000 UTC m=+5.778702699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gfzfb" (UniqueName: "kubernetes.io/projected/e5aa0608-87e0-4889-977a-72f9629e4821-kube-api-access-gfzfb") pod "kube-flannel-ds-62zzp" (UID: "e5aa0608-87e0-4889-977a-72f9629e4821") : configmap "kube-root-ca.crt" not found Jan 23 00:10:50.949548 kubelet[3369]: E0123 00:10:50.949522 3369 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 00:10:50.949548 kubelet[3369]: E0123 00:10:50.949546 3369 projected.go:194] Error preparing data for projected volume kube-api-access-kmvbs for pod kube-system/kube-proxy-clcnl: configmap "kube-root-ca.crt" not found Jan 23 00:10:50.949642 kubelet[3369]: E0123 00:10:50.949576 3369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aaa335b2-a6b7-4e85-97de-7547e1c73ee9-kube-api-access-kmvbs podName:aaa335b2-a6b7-4e85-97de-7547e1c73ee9 nodeName:}" failed. No retries permitted until 2026-01-23 00:10:51.449564923 +0000 UTC m=+5.780805545 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kmvbs" (UniqueName: "kubernetes.io/projected/aaa335b2-a6b7-4e85-97de-7547e1c73ee9-kube-api-access-kmvbs") pod "kube-proxy-clcnl" (UID: "aaa335b2-a6b7-4e85-97de-7547e1c73ee9") : configmap "kube-root-ca.crt" not found Jan 23 00:10:51.661716 containerd[1893]: time="2026-01-23T00:10:51.661384950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clcnl,Uid:aaa335b2-a6b7-4e85-97de-7547e1c73ee9,Namespace:kube-system,Attempt:0,}" Jan 23 00:10:51.666505 containerd[1893]: time="2026-01-23T00:10:51.666461997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-62zzp,Uid:e5aa0608-87e0-4889-977a-72f9629e4821,Namespace:kube-flannel,Attempt:0,}" Jan 23 00:10:51.703697 containerd[1893]: time="2026-01-23T00:10:51.703355012Z" level=info msg="connecting to shim 44f06fe5f5b4cd8fd4ee2b8a6ba411d24b197e6ff420e52e9089a13a9a1466dc" address="unix:///run/containerd/s/36559e1314f712b1f75ef65d1676b9aadd187c7d83891696efcdec337950b3cc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:10:51.716311 containerd[1893]: time="2026-01-23T00:10:51.716235124Z" level=info msg="connecting to shim 7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4" address="unix:///run/containerd/s/ec5a28bd3790d7b0d1e991ec1252b65b827730cfa9ec663ba1538a69fc430c1b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:10:51.734434 systemd[1]: Started cri-containerd-44f06fe5f5b4cd8fd4ee2b8a6ba411d24b197e6ff420e52e9089a13a9a1466dc.scope - libcontainer container 44f06fe5f5b4cd8fd4ee2b8a6ba411d24b197e6ff420e52e9089a13a9a1466dc. Jan 23 00:10:51.738534 systemd[1]: Started cri-containerd-7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4.scope - libcontainer container 7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4. 
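The MountVolume.SetUp failures above are for the auto-injected service-account volumes (kube-api-access-gfzfb and kube-api-access-kmvbs). These are projected volumes, and one of their sources is the kube-root-ca.crt ConfigMap that the root-CA publisher controller creates per namespace; right after bootstrap it does not yet exist in kube-flannel or kube-system, so the whole projected volume fails and is retried 500 ms later, after which both sandboxes are created. As a rough illustration (the volume name matches the log, the layout is the standard kube-api-access shape and is assumed rather than read from this cluster), the injected volume has this form:

    # illustrative pod-spec fragment, not captured in this log
    volumes:
    - name: kube-api-access-kmvbs
      projected:
        sources:
        - serviceAccountToken:
            path: token
        - configMap:
            name: kube-root-ca.crt
            items:
            - key: ca.crt
              path: ca.crt
        - downwardAPI:
            items:
            - path: namespace
              fieldRef:
                fieldPath: metadata.namespace

A projected volume is set up as a unit, so the missing configMap source is enough to fail the mount even though the token and downward API sources are available.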
Jan 23 00:10:51.776705 containerd[1893]: time="2026-01-23T00:10:51.776658794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clcnl,Uid:aaa335b2-a6b7-4e85-97de-7547e1c73ee9,Namespace:kube-system,Attempt:0,} returns sandbox id \"44f06fe5f5b4cd8fd4ee2b8a6ba411d24b197e6ff420e52e9089a13a9a1466dc\"" Jan 23 00:10:51.781997 containerd[1893]: time="2026-01-23T00:10:51.781939888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-62zzp,Uid:e5aa0608-87e0-4889-977a-72f9629e4821,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4\"" Jan 23 00:10:51.782389 containerd[1893]: time="2026-01-23T00:10:51.782334461Z" level=info msg="CreateContainer within sandbox \"44f06fe5f5b4cd8fd4ee2b8a6ba411d24b197e6ff420e52e9089a13a9a1466dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 00:10:51.784387 containerd[1893]: time="2026-01-23T00:10:51.784358976Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 23 00:10:51.803706 containerd[1893]: time="2026-01-23T00:10:51.803662947Z" level=info msg="Container 5acd72f97a3268a5f800021ebb2d130444b3caaa3f0b27ba3f34042c1a6ca780: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:10:51.818055 containerd[1893]: time="2026-01-23T00:10:51.818006796Z" level=info msg="CreateContainer within sandbox \"44f06fe5f5b4cd8fd4ee2b8a6ba411d24b197e6ff420e52e9089a13a9a1466dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5acd72f97a3268a5f800021ebb2d130444b3caaa3f0b27ba3f34042c1a6ca780\"" Jan 23 00:10:51.819166 containerd[1893]: time="2026-01-23T00:10:51.819132729Z" level=info msg="StartContainer for \"5acd72f97a3268a5f800021ebb2d130444b3caaa3f0b27ba3f34042c1a6ca780\"" Jan 23 00:10:51.821682 containerd[1893]: time="2026-01-23T00:10:51.821653084Z" level=info msg="connecting to shim 5acd72f97a3268a5f800021ebb2d130444b3caaa3f0b27ba3f34042c1a6ca780" address="unix:///run/containerd/s/36559e1314f712b1f75ef65d1676b9aadd187c7d83891696efcdec337950b3cc" protocol=ttrpc version=3 Jan 23 00:10:51.841447 systemd[1]: Started cri-containerd-5acd72f97a3268a5f800021ebb2d130444b3caaa3f0b27ba3f34042c1a6ca780.scope - libcontainer container 5acd72f97a3268a5f800021ebb2d130444b3caaa3f0b27ba3f34042c1a6ca780. Jan 23 00:10:51.901874 containerd[1893]: time="2026-01-23T00:10:51.901808124Z" level=info msg="StartContainer for \"5acd72f97a3268a5f800021ebb2d130444b3caaa3f0b27ba3f34042c1a6ca780\" returns successfully" Jan 23 00:10:54.104114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594040387.mount: Deactivated successfully. 
Jan 23 00:10:54.220928 containerd[1893]: time="2026-01-23T00:10:54.220424960Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:54.223541 containerd[1893]: time="2026-01-23T00:10:54.223507483Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Jan 23 00:10:54.226354 containerd[1893]: time="2026-01-23T00:10:54.226325508Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:54.230138 containerd[1893]: time="2026-01-23T00:10:54.230096181Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:54.230794 containerd[1893]: time="2026-01-23T00:10:54.230496201Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.446104025s" Jan 23 00:10:54.230794 containerd[1893]: time="2026-01-23T00:10:54.230524826Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 23 00:10:54.233697 containerd[1893]: time="2026-01-23T00:10:54.233675487Z" level=info msg="CreateContainer within sandbox \"7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 00:10:54.252278 containerd[1893]: time="2026-01-23T00:10:54.251529024Z" level=info msg="Container 7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:10:54.264445 containerd[1893]: time="2026-01-23T00:10:54.264405426Z" level=info msg="CreateContainer within sandbox \"7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167\"" Jan 23 00:10:54.265335 containerd[1893]: time="2026-01-23T00:10:54.265224093Z" level=info msg="StartContainer for \"7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167\"" Jan 23 00:10:54.265918 containerd[1893]: time="2026-01-23T00:10:54.265884082Z" level=info msg="connecting to shim 7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167" address="unix:///run/containerd/s/ec5a28bd3790d7b0d1e991ec1252b65b827730cfa9ec663ba1538a69fc430c1b" protocol=ttrpc version=3 Jan 23 00:10:54.283425 systemd[1]: Started cri-containerd-7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167.scope - libcontainer container 7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167. Jan 23 00:10:54.305370 systemd[1]: cri-containerd-7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167.scope: Deactivated successfully. 
Jan 23 00:10:54.310903 containerd[1893]: time="2026-01-23T00:10:54.310841179Z" level=info msg="received container exit event container_id:\"7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167\" id:\"7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167\" pid:3702 exited_at:{seconds:1769127054 nanos:308077211}" Jan 23 00:10:54.311935 containerd[1893]: time="2026-01-23T00:10:54.311908021Z" level=info msg="StartContainer for \"7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167\" returns successfully" Jan 23 00:10:54.876228 containerd[1893]: time="2026-01-23T00:10:54.875971095Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 23 00:10:54.899743 kubelet[3369]: I0123 00:10:54.899489 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clcnl" podStartSLOduration=4.89947278 podStartE2EDuration="4.89947278s" podCreationTimestamp="2026-01-23 00:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:10:52.878770676 +0000 UTC m=+7.210011290" watchObservedRunningTime="2026-01-23 00:10:54.89947278 +0000 UTC m=+9.230713394" Jan 23 00:10:55.055444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc5ce039b11c42b2bc528783d918c3d59b9386847b24ce5501e956f21d1f167-rootfs.mount: Deactivated successfully. Jan 23 00:10:56.782526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092424035.mount: Deactivated successfully. Jan 23 00:10:57.697405 containerd[1893]: time="2026-01-23T00:10:57.697345418Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:57.699610 containerd[1893]: time="2026-01-23T00:10:57.699424317Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 23 00:10:57.701799 containerd[1893]: time="2026-01-23T00:10:57.701773704Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:57.707911 containerd[1893]: time="2026-01-23T00:10:57.707875914Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:10:57.708640 containerd[1893]: time="2026-01-23T00:10:57.708442628Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.832427108s" Jan 23 00:10:57.708640 containerd[1893]: time="2026-01-23T00:10:57.708471749Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 23 00:10:57.711651 containerd[1893]: time="2026-01-23T00:10:57.711610225Z" level=info msg="CreateContainer within sandbox \"7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 00:10:57.727003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406373851.mount: Deactivated successfully. 
Jan 23 00:10:57.727811 containerd[1893]: time="2026-01-23T00:10:57.727764500Z" level=info msg="Container bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:10:57.740608 containerd[1893]: time="2026-01-23T00:10:57.740569165Z" level=info msg="CreateContainer within sandbox \"7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e\"" Jan 23 00:10:57.741322 containerd[1893]: time="2026-01-23T00:10:57.741190929Z" level=info msg="StartContainer for \"bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e\"" Jan 23 00:10:57.743258 containerd[1893]: time="2026-01-23T00:10:57.743230442Z" level=info msg="connecting to shim bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e" address="unix:///run/containerd/s/ec5a28bd3790d7b0d1e991ec1252b65b827730cfa9ec663ba1538a69fc430c1b" protocol=ttrpc version=3 Jan 23 00:10:57.759382 systemd[1]: Started cri-containerd-bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e.scope - libcontainer container bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e. Jan 23 00:10:57.779087 systemd[1]: cri-containerd-bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e.scope: Deactivated successfully. Jan 23 00:10:57.782850 containerd[1893]: time="2026-01-23T00:10:57.782818048Z" level=info msg="received container exit event container_id:\"bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e\" id:\"bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e\" pid:3777 exited_at:{seconds:1769127057 nanos:780390883}" Jan 23 00:10:57.783857 containerd[1893]: time="2026-01-23T00:10:57.783835936Z" level=info msg="StartContainer for \"bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e\" returns successfully" Jan 23 00:10:57.799964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bae1fb2bae7f5fa19fb40c35c6315cfc17a5c98d83aba74e9939e7455a84675e-rootfs.mount: Deactivated successfully. Jan 23 00:10:57.833583 kubelet[3369]: I0123 00:10:57.833557 3369 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 00:10:57.877647 kubelet[3369]: I0123 00:10:57.877609 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45n85\" (UniqueName: \"kubernetes.io/projected/27b3f137-2d87-4c6b-8987-9c21cfd89f02-kube-api-access-45n85\") pod \"coredns-668d6bf9bc-txhwz\" (UID: \"27b3f137-2d87-4c6b-8987-9c21cfd89f02\") " pod="kube-system/coredns-668d6bf9bc-txhwz" Jan 23 00:10:57.877647 kubelet[3369]: I0123 00:10:57.877651 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27b3f137-2d87-4c6b-8987-9c21cfd89f02-config-volume\") pod \"coredns-668d6bf9bc-txhwz\" (UID: \"27b3f137-2d87-4c6b-8987-9c21cfd89f02\") " pod="kube-system/coredns-668d6bf9bc-txhwz" Jan 23 00:10:57.877996 systemd[1]: Created slice kubepods-burstable-pod27b3f137_2d87_4c6b_8987_9c21cfd89f02.slice - libcontainer container kubepods-burstable-pod27b3f137_2d87_4c6b_8987_9c21cfd89f02.slice. Jan 23 00:10:57.883896 systemd[1]: Created slice kubepods-burstable-pod7f665c56_46e8_4ec5_a337_8fcb814a7734.slice - libcontainer container kubepods-burstable-pod7f665c56_46e8_4ec5_a337_8fcb814a7734.slice. 
Jan 23 00:10:57.977955 kubelet[3369]: I0123 00:10:57.977826 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f665c56-46e8-4ec5-a337-8fcb814a7734-config-volume\") pod \"coredns-668d6bf9bc-zjxxl\" (UID: \"7f665c56-46e8-4ec5-a337-8fcb814a7734\") " pod="kube-system/coredns-668d6bf9bc-zjxxl" Jan 23 00:10:57.977955 kubelet[3369]: I0123 00:10:57.977885 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2w4t\" (UniqueName: \"kubernetes.io/projected/7f665c56-46e8-4ec5-a337-8fcb814a7734-kube-api-access-f2w4t\") pod \"coredns-668d6bf9bc-zjxxl\" (UID: \"7f665c56-46e8-4ec5-a337-8fcb814a7734\") " pod="kube-system/coredns-668d6bf9bc-zjxxl" Jan 23 00:10:58.181956 containerd[1893]: time="2026-01-23T00:10:58.181912557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-txhwz,Uid:27b3f137-2d87-4c6b-8987-9c21cfd89f02,Namespace:kube-system,Attempt:0,}" Jan 23 00:10:58.233933 containerd[1893]: time="2026-01-23T00:10:58.233668631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjxxl,Uid:7f665c56-46e8-4ec5-a337-8fcb814a7734,Namespace:kube-system,Attempt:0,}" Jan 23 00:10:58.322660 containerd[1893]: time="2026-01-23T00:10:58.322578298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-txhwz,Uid:27b3f137-2d87-4c6b-8987-9c21cfd89f02,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ff06d9bdeb04cfc2438cd4b5318fd7450cb3b48f1ee8ab5b14fdc8b3e5f6f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 00:10:58.323075 kubelet[3369]: E0123 00:10:58.323019 3369 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ff06d9bdeb04cfc2438cd4b5318fd7450cb3b48f1ee8ab5b14fdc8b3e5f6f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 00:10:58.323137 kubelet[3369]: E0123 00:10:58.323097 3369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ff06d9bdeb04cfc2438cd4b5318fd7450cb3b48f1ee8ab5b14fdc8b3e5f6f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-txhwz" Jan 23 00:10:58.323169 kubelet[3369]: E0123 00:10:58.323137 3369 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ff06d9bdeb04cfc2438cd4b5318fd7450cb3b48f1ee8ab5b14fdc8b3e5f6f4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-txhwz" Jan 23 00:10:58.323207 kubelet[3369]: E0123 00:10:58.323176 3369 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-txhwz_kube-system(27b3f137-2d87-4c6b-8987-9c21cfd89f02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-txhwz_kube-system(27b3f137-2d87-4c6b-8987-9c21cfd89f02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"88ff06d9bdeb04cfc2438cd4b5318fd7450cb3b48f1ee8ab5b14fdc8b3e5f6f4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-txhwz" podUID="27b3f137-2d87-4c6b-8987-9c21cfd89f02" Jan 23 00:10:58.324891 containerd[1893]: time="2026-01-23T00:10:58.324851418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjxxl,Uid:7f665c56-46e8-4ec5-a337-8fcb814a7734,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c06e97857eb5763549ab8993fc4ae3936fb9bfcdaeb18f21b59207c2c1166ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 00:10:58.325115 kubelet[3369]: E0123 00:10:58.324999 3369 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c06e97857eb5763549ab8993fc4ae3936fb9bfcdaeb18f21b59207c2c1166ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 00:10:58.325115 kubelet[3369]: E0123 00:10:58.325035 3369 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c06e97857eb5763549ab8993fc4ae3936fb9bfcdaeb18f21b59207c2c1166ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-zjxxl" Jan 23 00:10:58.325115 kubelet[3369]: E0123 00:10:58.325050 3369 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c06e97857eb5763549ab8993fc4ae3936fb9bfcdaeb18f21b59207c2c1166ea\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-zjxxl" Jan 23 00:10:58.325270 kubelet[3369]: E0123 00:10:58.325227 3369 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zjxxl_kube-system(7f665c56-46e8-4ec5-a337-8fcb814a7734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zjxxl_kube-system(7f665c56-46e8-4ec5-a337-8fcb814a7734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c06e97857eb5763549ab8993fc4ae3936fb9bfcdaeb18f21b59207c2c1166ea\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-zjxxl" podUID="7f665c56-46e8-4ec5-a337-8fcb814a7734" Jan 23 00:10:58.745419 systemd[1]: run-netns-cni\x2d0a9f7e0a\x2d9b83\x2dfb3e\x2d2778\x2dbb173abe32a3.mount: Deactivated successfully. 
Jan 23 00:10:58.894267 containerd[1893]: time="2026-01-23T00:10:58.893781215Z" level=info msg="CreateContainer within sandbox \"7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 00:10:58.910396 containerd[1893]: time="2026-01-23T00:10:58.910363703Z" level=info msg="Container 47220a028863bd952c8412c0988e487b84aec10d417611fe3c91385a68e3a40c: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:10:58.922602 containerd[1893]: time="2026-01-23T00:10:58.922551020Z" level=info msg="CreateContainer within sandbox \"7ece94518d4d877bb386395666a3df1f2fc8d8e491eb57ae385eed1999f6c2d4\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"47220a028863bd952c8412c0988e487b84aec10d417611fe3c91385a68e3a40c\"" Jan 23 00:10:58.923399 containerd[1893]: time="2026-01-23T00:10:58.923371198Z" level=info msg="StartContainer for \"47220a028863bd952c8412c0988e487b84aec10d417611fe3c91385a68e3a40c\"" Jan 23 00:10:58.924472 containerd[1893]: time="2026-01-23T00:10:58.924394335Z" level=info msg="connecting to shim 47220a028863bd952c8412c0988e487b84aec10d417611fe3c91385a68e3a40c" address="unix:///run/containerd/s/ec5a28bd3790d7b0d1e991ec1252b65b827730cfa9ec663ba1538a69fc430c1b" protocol=ttrpc version=3 Jan 23 00:10:58.944401 systemd[1]: Started cri-containerd-47220a028863bd952c8412c0988e487b84aec10d417611fe3c91385a68e3a40c.scope - libcontainer container 47220a028863bd952c8412c0988e487b84aec10d417611fe3c91385a68e3a40c. Jan 23 00:10:58.969328 containerd[1893]: time="2026-01-23T00:10:58.969012341Z" level=info msg="StartContainer for \"47220a028863bd952c8412c0988e487b84aec10d417611fe3c91385a68e3a40c\" returns successfully" Jan 23 00:11:00.096377 systemd-networkd[1486]: flannel.1: Link UP Jan 23 00:11:00.097422 systemd-networkd[1486]: flannel.1: Gained carrier Jan 23 00:11:01.194376 systemd-networkd[1486]: flannel.1: Gained IPv6LL Jan 23 00:11:05.397298 waagent[2121]: 2026-01-23T00:11:05.396576Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 00:11:05.405434 waagent[2121]: 2026-01-23T00:11:05.405384Z INFO ExtHandler Jan 23 00:11:05.405545 waagent[2121]: 2026-01-23T00:11:05.405469Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 00:11:05.451314 waagent[2121]: 2026-01-23T00:11:05.451240Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 00:11:05.510902 waagent[2121]: 2026-01-23T00:11:05.510812Z INFO ExtHandler Downloaded certificate {'thumbprint': '58CECBA5F5631786812853AE2ED20F365DAA5F4B', 'hasPrivateKey': True} Jan 23 00:11:05.511359 waagent[2121]: 2026-01-23T00:11:05.511317Z INFO ExtHandler Fetch goal state completed Jan 23 00:11:05.511686 waagent[2121]: 2026-01-23T00:11:05.511654Z INFO ExtHandler ExtHandler Jan 23 00:11:05.511739 waagent[2121]: 2026-01-23T00:11:05.511719Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: bdaf1b3f-0b26-40e8-968e-50f2c1046787 correlation d44fb598-91c0-49f0-9bd6-8c3ece6bbc3c created: 2026-01-23T00:10:59.283569Z] Jan 23 00:11:05.512029 waagent[2121]: 2026-01-23T00:11:05.511998Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 00:11:05.512474 waagent[2121]: 2026-01-23T00:11:05.512444Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 00:11:09.809485 containerd[1893]: time="2026-01-23T00:11:09.809140456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-txhwz,Uid:27b3f137-2d87-4c6b-8987-9c21cfd89f02,Namespace:kube-system,Attempt:0,}" Jan 23 00:11:09.820095 systemd-networkd[1486]: cni0: Link UP Jan 23 00:11:09.820103 systemd-networkd[1486]: cni0: Gained carrier Jan 23 00:11:09.822716 systemd-networkd[1486]: cni0: Lost carrier Jan 23 00:11:09.844907 systemd-networkd[1486]: vethd6317e24: Link UP Jan 23 00:11:09.850613 kernel: cni0: port 1(vethd6317e24) entered blocking state Jan 23 00:11:09.850676 kernel: cni0: port 1(vethd6317e24) entered disabled state Jan 23 00:11:09.853262 kernel: vethd6317e24: entered allmulticast mode Jan 23 00:11:09.855959 kernel: vethd6317e24: entered promiscuous mode Jan 23 00:11:09.867896 kernel: cni0: port 1(vethd6317e24) entered blocking state Jan 23 00:11:09.867998 kernel: cni0: port 1(vethd6317e24) entered forwarding state Jan 23 00:11:09.868364 systemd-networkd[1486]: vethd6317e24: Gained carrier Jan 23 00:11:09.872443 systemd-networkd[1486]: cni0: Gained carrier Jan 23 00:11:09.875191 containerd[1893]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} Jan 23 00:11:09.875191 containerd[1893]: delegateAdd: netconf sent to delegate plugin: Jan 23 00:11:09.917692 containerd[1893]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T00:11:09.917642883Z" level=info msg="connecting to shim e578ff912cc289d6a0b757fa7796599f77947f33ff197d1eb9557db941f01e26" address="unix:///run/containerd/s/e8c65ec0ab68b0cadfb0da42b6401d81114c7a2af3523bf4ee10440dc1a3f40c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:11:09.939442 systemd[1]: Started cri-containerd-e578ff912cc289d6a0b757fa7796599f77947f33ff197d1eb9557db941f01e26.scope - libcontainer container e578ff912cc289d6a0b757fa7796599f77947f33ff197d1eb9557db941f01e26. 
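The kernel lines above show the first pod being wired into the overlay: the bridge plugin creates cni0 from the "cbr0" config printed just before, the pod's veth (vethd6317e24) is attached as port 1 and moves to forwarding, and host-local IPAM assigns an address out of 192.168.0.0/24, while traffic for other nodes' pod subnets inside 192.168.0.0/17 is meant to leave via the flannel.1 VXLAN device that came up at 00:11:00. To verify the result on the node, something along these lines would do it (hypothetical inspection commands; no output is shown because none was captured here):

    ip -d link show flannel.1          # VXLAN device, MTU 1450
    ip link show cni0                  # pod bridge created by the bridge plugin
    bridge link show dev vethd6317e24
    ip route show                      # local pod /24 via cni0; remote pod subnets via flannel.1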
Jan 23 00:11:09.975277 containerd[1893]: time="2026-01-23T00:11:09.974383683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-txhwz,Uid:27b3f137-2d87-4c6b-8987-9c21cfd89f02,Namespace:kube-system,Attempt:0,} returns sandbox id \"e578ff912cc289d6a0b757fa7796599f77947f33ff197d1eb9557db941f01e26\"" Jan 23 00:11:09.980949 containerd[1893]: time="2026-01-23T00:11:09.980875824Z" level=info msg="CreateContainer within sandbox \"e578ff912cc289d6a0b757fa7796599f77947f33ff197d1eb9557db941f01e26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:11:09.999590 containerd[1893]: time="2026-01-23T00:11:09.999412897Z" level=info msg="Container b7f9f27b51c13a5cef19af2b3691eb1c1871e78ddb7fcadd99bbd0ff77839ed7: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:10.014659 containerd[1893]: time="2026-01-23T00:11:10.014587536Z" level=info msg="CreateContainer within sandbox \"e578ff912cc289d6a0b757fa7796599f77947f33ff197d1eb9557db941f01e26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7f9f27b51c13a5cef19af2b3691eb1c1871e78ddb7fcadd99bbd0ff77839ed7\"" Jan 23 00:11:10.015680 containerd[1893]: time="2026-01-23T00:11:10.015630057Z" level=info msg="StartContainer for \"b7f9f27b51c13a5cef19af2b3691eb1c1871e78ddb7fcadd99bbd0ff77839ed7\"" Jan 23 00:11:10.016639 containerd[1893]: time="2026-01-23T00:11:10.016609160Z" level=info msg="connecting to shim b7f9f27b51c13a5cef19af2b3691eb1c1871e78ddb7fcadd99bbd0ff77839ed7" address="unix:///run/containerd/s/e8c65ec0ab68b0cadfb0da42b6401d81114c7a2af3523bf4ee10440dc1a3f40c" protocol=ttrpc version=3 Jan 23 00:11:10.036454 systemd[1]: Started cri-containerd-b7f9f27b51c13a5cef19af2b3691eb1c1871e78ddb7fcadd99bbd0ff77839ed7.scope - libcontainer container b7f9f27b51c13a5cef19af2b3691eb1c1871e78ddb7fcadd99bbd0ff77839ed7. 
Jan 23 00:11:10.067698 containerd[1893]: time="2026-01-23T00:11:10.067365259Z" level=info msg="StartContainer for \"b7f9f27b51c13a5cef19af2b3691eb1c1871e78ddb7fcadd99bbd0ff77839ed7\" returns successfully" Jan 23 00:11:10.927408 kubelet[3369]: I0123 00:11:10.927342 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-62zzp" podStartSLOduration=15.0019472 podStartE2EDuration="20.927325014s" podCreationTimestamp="2026-01-23 00:10:50 +0000 UTC" firstStartedPulling="2026-01-23 00:10:51.783990948 +0000 UTC m=+6.115231570" lastFinishedPulling="2026-01-23 00:10:57.70936877 +0000 UTC m=+12.040609384" observedRunningTime="2026-01-23 00:10:59.906928519 +0000 UTC m=+14.238169213" watchObservedRunningTime="2026-01-23 00:11:10.927325014 +0000 UTC m=+25.258565628" Jan 23 00:11:10.944807 kubelet[3369]: I0123 00:11:10.944157 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-txhwz" podStartSLOduration=19.944137409 podStartE2EDuration="19.944137409s" podCreationTimestamp="2026-01-23 00:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:11:10.929133575 +0000 UTC m=+25.260374189" watchObservedRunningTime="2026-01-23 00:11:10.944137409 +0000 UTC m=+25.275378031" Jan 23 00:11:11.178488 systemd-networkd[1486]: vethd6317e24: Gained IPv6LL Jan 23 00:11:11.562468 systemd-networkd[1486]: cni0: Gained IPv6LL Jan 23 00:11:11.574229 waagent[2121]: 2026-01-23T00:11:11.574183Z INFO ExtHandler Jan 23 00:11:11.574553 waagent[2121]: 2026-01-23T00:11:11.574346Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b52985ec-6b1e-4822-ab1e-96fc09fc0058 eTag: 12633996025155396362 source: Fabric] Jan 23 00:11:11.574832 waagent[2121]: 2026-01-23T00:11:11.574788Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
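The startup-latency entry for kube-flannel-ds-62zzp shows how the tracker separates image pulling from the rest of startup: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the time spent pulling images. With the timestamps from that entry:

    E2E  = 00:11:10.927325014 - 00:10:50            = 20.927325014 s
    pull = 00:10:57.709368770 - 00:10:51.783990948  =  5.925377822 s
    SLO  = 20.927325014       - 5.925377822         = 15.001947192 s   (logged as 15.0019472)

The coredns entries carry the zero time for firstStartedPulling/lastFinishedPulling because their images were already on the node, so their SLO and E2E durations come out identical.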
Jan 23 00:11:12.808915 containerd[1893]: time="2026-01-23T00:11:12.808865941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjxxl,Uid:7f665c56-46e8-4ec5-a337-8fcb814a7734,Namespace:kube-system,Attempt:0,}" Jan 23 00:11:12.829830 systemd-networkd[1486]: veth6d8074f6: Link UP Jan 23 00:11:12.836877 kernel: cni0: port 2(veth6d8074f6) entered blocking state Jan 23 00:11:12.836970 kernel: cni0: port 2(veth6d8074f6) entered disabled state Jan 23 00:11:12.839687 kernel: veth6d8074f6: entered allmulticast mode Jan 23 00:11:12.842833 kernel: veth6d8074f6: entered promiscuous mode Jan 23 00:11:12.853387 kernel: cni0: port 2(veth6d8074f6) entered blocking state Jan 23 00:11:12.853457 kernel: cni0: port 2(veth6d8074f6) entered forwarding state Jan 23 00:11:12.854727 systemd-networkd[1486]: veth6d8074f6: Gained carrier Jan 23 00:11:12.856649 containerd[1893]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"} Jan 23 00:11:12.856649 containerd[1893]: delegateAdd: netconf sent to delegate plugin: Jan 23 00:11:12.892531 containerd[1893]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T00:11:12.892490478Z" level=info msg="connecting to shim 045a9bf23d65d89bc4cb5ca7cca98097352cda6dcef570e4b0d4e4dc220b12cf" address="unix:///run/containerd/s/e5659c15e1c566ea7d3a26baf052ed18894b022e59c4dffff30c9d045072fe6c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:11:12.914287 systemd[1]: Started cri-containerd-045a9bf23d65d89bc4cb5ca7cca98097352cda6dcef570e4b0d4e4dc220b12cf.scope - libcontainer container 045a9bf23d65d89bc4cb5ca7cca98097352cda6dcef570e4b0d4e4dc220b12cf. 
Jan 23 00:11:12.947010 containerd[1893]: time="2026-01-23T00:11:12.946968854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjxxl,Uid:7f665c56-46e8-4ec5-a337-8fcb814a7734,Namespace:kube-system,Attempt:0,} returns sandbox id \"045a9bf23d65d89bc4cb5ca7cca98097352cda6dcef570e4b0d4e4dc220b12cf\"" Jan 23 00:11:12.949776 containerd[1893]: time="2026-01-23T00:11:12.949738406Z" level=info msg="CreateContainer within sandbox \"045a9bf23d65d89bc4cb5ca7cca98097352cda6dcef570e4b0d4e4dc220b12cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:11:12.972477 containerd[1893]: time="2026-01-23T00:11:12.972188611Z" level=info msg="Container 39a42806d6778849d3a8821e81c09dc93369b683229312323a633b103c03c0bc: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:12.984556 containerd[1893]: time="2026-01-23T00:11:12.984509960Z" level=info msg="CreateContainer within sandbox \"045a9bf23d65d89bc4cb5ca7cca98097352cda6dcef570e4b0d4e4dc220b12cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39a42806d6778849d3a8821e81c09dc93369b683229312323a633b103c03c0bc\"" Jan 23 00:11:12.985279 containerd[1893]: time="2026-01-23T00:11:12.985121659Z" level=info msg="StartContainer for \"39a42806d6778849d3a8821e81c09dc93369b683229312323a633b103c03c0bc\"" Jan 23 00:11:12.986895 containerd[1893]: time="2026-01-23T00:11:12.986845602Z" level=info msg="connecting to shim 39a42806d6778849d3a8821e81c09dc93369b683229312323a633b103c03c0bc" address="unix:///run/containerd/s/e5659c15e1c566ea7d3a26baf052ed18894b022e59c4dffff30c9d045072fe6c" protocol=ttrpc version=3 Jan 23 00:11:13.003426 systemd[1]: Started cri-containerd-39a42806d6778849d3a8821e81c09dc93369b683229312323a633b103c03c0bc.scope - libcontainer container 39a42806d6778849d3a8821e81c09dc93369b683229312323a633b103c03c0bc. Jan 23 00:11:13.028890 containerd[1893]: time="2026-01-23T00:11:13.028847528Z" level=info msg="StartContainer for \"39a42806d6778849d3a8821e81c09dc93369b683229312323a633b103c03c0bc\" returns successfully" Jan 23 00:11:13.956327 kubelet[3369]: I0123 00:11:13.956223 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zjxxl" podStartSLOduration=22.956203796 podStartE2EDuration="22.956203796s" podCreationTimestamp="2026-01-23 00:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:11:13.939815718 +0000 UTC m=+28.271056340" watchObservedRunningTime="2026-01-23 00:11:13.956203796 +0000 UTC m=+28.287444426" Jan 23 00:11:14.122482 systemd-networkd[1486]: veth6d8074f6: Gained IPv6LL Jan 23 00:12:19.722405 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:38048.service - OpenSSH per-connection server daemon (10.200.16.10:38048). Jan 23 00:12:20.213713 sshd[4534]: Accepted publickey for core from 10.200.16.10 port 38048 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:20.214662 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:20.218178 systemd-logind[1875]: New session 8 of user core. Jan 23 00:12:20.228611 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 00:12:20.642135 sshd[4537]: Connection closed by 10.200.16.10 port 38048 Jan 23 00:12:20.642734 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:20.646849 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:38048.service: Deactivated successfully. 
Jan 23 00:12:20.649594 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 00:12:20.650655 systemd-logind[1875]: Session 8 logged out. Waiting for processes to exit. Jan 23 00:12:20.652155 systemd-logind[1875]: Removed session 8. Jan 23 00:12:25.717288 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:38050.service - OpenSSH per-connection server daemon (10.200.16.10:38050). Jan 23 00:12:26.168383 sshd[4595]: Accepted publickey for core from 10.200.16.10 port 38050 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:26.169487 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:26.173035 systemd-logind[1875]: New session 9 of user core. Jan 23 00:12:26.177385 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 00:12:26.530023 sshd[4598]: Connection closed by 10.200.16.10 port 38050 Jan 23 00:12:26.529919 sshd-session[4595]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:26.534103 systemd-logind[1875]: Session 9 logged out. Waiting for processes to exit. Jan 23 00:12:26.534242 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:38050.service: Deactivated successfully. Jan 23 00:12:26.535890 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 00:12:26.537650 systemd-logind[1875]: Removed session 9. Jan 23 00:12:31.629449 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:38792.service - OpenSSH per-connection server daemon (10.200.16.10:38792). Jan 23 00:12:32.122795 sshd[4631]: Accepted publickey for core from 10.200.16.10 port 38792 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:32.123597 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:32.127377 systemd-logind[1875]: New session 10 of user core. Jan 23 00:12:32.132402 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 00:12:32.512125 sshd[4634]: Connection closed by 10.200.16.10 port 38792 Jan 23 00:12:32.512950 sshd-session[4631]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:32.516610 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:38792.service: Deactivated successfully. Jan 23 00:12:32.518392 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 00:12:32.519378 systemd-logind[1875]: Session 10 logged out. Waiting for processes to exit. Jan 23 00:12:32.521138 systemd-logind[1875]: Removed session 10. Jan 23 00:12:32.602580 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:38796.service - OpenSSH per-connection server daemon (10.200.16.10:38796). Jan 23 00:12:33.096806 sshd[4647]: Accepted publickey for core from 10.200.16.10 port 38796 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:33.097868 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:33.101479 systemd-logind[1875]: New session 11 of user core. Jan 23 00:12:33.107492 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 00:12:33.513702 sshd[4650]: Connection closed by 10.200.16.10 port 38796 Jan 23 00:12:33.514309 sshd-session[4647]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:33.518305 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:38796.service: Deactivated successfully. Jan 23 00:12:33.520106 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 00:12:33.520877 systemd-logind[1875]: Session 11 logged out. Waiting for processes to exit. 
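Editor's note: each connection above follows the same authentication pattern: sshd accepts the "core" user's RSA key (identified only by its SHA256 fingerprint), pam_unix opens a session for uid 500, systemd-logind registers a numbered session, and the same trio unwinds when the client disconnects. When auditing a burst of logins like this, it can help to pull the user, source address, port, and key fingerprint out of each "Accepted publickey" record; the stdlib-only Go sketch below (an illustration, not something running on this host) does that for a line copied from the log above.

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches sshd's "Accepted publickey" records as they appear in the journal above.
    var acceptedRe = regexp.MustCompile(
        `Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) SHA256:(\S+)`)

    func main() {
        line := `sshd[4595]: Accepted publickey for core from 10.200.16.10 port 38050 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc`

        m := acceptedRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no publickey login found")
            return
        }
        user, addr, port, keyType, fp := m[1], m[2], m[3], m[4], m[5]
        fmt.Printf("user=%s source=%s:%s key=%s fingerprint=SHA256:%s\n",
            user, addr, port, keyType, fp)
    }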
Jan 23 00:12:33.522404 systemd-logind[1875]: Removed session 11. Jan 23 00:12:33.601561 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:38806.service - OpenSSH per-connection server daemon (10.200.16.10:38806). Jan 23 00:12:34.094344 sshd[4660]: Accepted publickey for core from 10.200.16.10 port 38806 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:34.095460 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:34.098984 systemd-logind[1875]: New session 12 of user core. Jan 23 00:12:34.104517 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 00:12:34.484371 sshd[4663]: Connection closed by 10.200.16.10 port 38806 Jan 23 00:12:34.485007 sshd-session[4660]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:34.489044 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:38806.service: Deactivated successfully. Jan 23 00:12:34.490764 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 00:12:34.491837 systemd-logind[1875]: Session 12 logged out. Waiting for processes to exit. Jan 23 00:12:34.493520 systemd-logind[1875]: Removed session 12. Jan 23 00:12:39.575884 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:47910.service - OpenSSH per-connection server daemon (10.200.16.10:47910). Jan 23 00:12:40.069187 sshd[4696]: Accepted publickey for core from 10.200.16.10 port 47910 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:40.071094 sshd-session[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:40.074887 systemd-logind[1875]: New session 13 of user core. Jan 23 00:12:40.081637 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 00:12:40.464439 sshd[4699]: Connection closed by 10.200.16.10 port 47910 Jan 23 00:12:40.465288 sshd-session[4696]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:40.469396 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:47910.service: Deactivated successfully. Jan 23 00:12:40.471209 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 00:12:40.472192 systemd-logind[1875]: Session 13 logged out. Waiting for processes to exit. Jan 23 00:12:40.473548 systemd-logind[1875]: Removed session 13. Jan 23 00:12:40.552748 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:47924.service - OpenSSH per-connection server daemon (10.200.16.10:47924). Jan 23 00:12:41.042854 sshd[4732]: Accepted publickey for core from 10.200.16.10 port 47924 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:41.044184 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:41.048196 systemd-logind[1875]: New session 14 of user core. Jan 23 00:12:41.052413 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 00:12:41.483360 sshd[4736]: Connection closed by 10.200.16.10 port 47924 Jan 23 00:12:41.482386 sshd-session[4732]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:41.486176 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:47924.service: Deactivated successfully. Jan 23 00:12:41.489749 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 00:12:41.491397 systemd-logind[1875]: Session 14 logged out. Waiting for processes to exit. Jan 23 00:12:41.492891 systemd-logind[1875]: Removed session 14. 
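Editor's note: the session-N.scope / "New session N of user core" pairs above come from systemd-logind, which tracks one session per accepted SSH connection while the socket-activated sshd@... per-connection units handle the TCP side. As a hedged sketch (assuming the go-systemd v22 D-Bus bindings, which are not shown anywhere in this log), the snippet below asks logind over D-Bus for the sessions currently open, which is essentially the programmatic form of `loginctl list-sessions`.

    package main

    import (
        "fmt"
        "log"

        "github.com/coreos/go-systemd/v22/login1"
    )

    func main() {
        // Talk to systemd-logind on the system bus.
        conn, err := login1.New()
        if err != nil {
            log.Fatalf("connect to logind: %v", err)
        }
        defer conn.Close()

        sessions, err := conn.ListSessions()
        if err != nil {
            log.Fatalf("list sessions: %v", err)
        }
        for _, s := range sessions {
            // Against the log above this would print entries like: session 12 user=core uid=500
            fmt.Printf("session %s user=%s uid=%d seat=%q\n", s.ID, s.User, s.UID, s.Seat)
        }
    }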
Jan 23 00:12:41.575551 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:47930.service - OpenSSH per-connection server daemon (10.200.16.10:47930). Jan 23 00:12:42.067296 sshd[4746]: Accepted publickey for core from 10.200.16.10 port 47930 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:42.068485 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:42.072408 systemd-logind[1875]: New session 15 of user core. Jan 23 00:12:42.080421 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 00:12:42.899627 sshd[4749]: Connection closed by 10.200.16.10 port 47930 Jan 23 00:12:42.900204 sshd-session[4746]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:42.908879 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:47930.service: Deactivated successfully. Jan 23 00:12:42.912787 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 00:12:42.914245 systemd-logind[1875]: Session 15 logged out. Waiting for processes to exit. Jan 23 00:12:42.917766 systemd-logind[1875]: Removed session 15. Jan 23 00:12:42.993104 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:47942.service - OpenSSH per-connection server daemon (10.200.16.10:47942). Jan 23 00:12:43.484112 sshd[4766]: Accepted publickey for core from 10.200.16.10 port 47942 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:43.485225 sshd-session[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:43.489080 systemd-logind[1875]: New session 16 of user core. Jan 23 00:12:43.496409 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 00:12:43.962049 sshd[4769]: Connection closed by 10.200.16.10 port 47942 Jan 23 00:12:43.961808 sshd-session[4766]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:43.966053 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:47942.service: Deactivated successfully. Jan 23 00:12:43.968012 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 00:12:43.969102 systemd-logind[1875]: Session 16 logged out. Waiting for processes to exit. Jan 23 00:12:43.971162 systemd-logind[1875]: Removed session 16. Jan 23 00:12:44.050156 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:47956.service - OpenSSH per-connection server daemon (10.200.16.10:47956). Jan 23 00:12:44.504364 sshd[4778]: Accepted publickey for core from 10.200.16.10 port 47956 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:44.505604 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:44.509591 systemd-logind[1875]: New session 17 of user core. Jan 23 00:12:44.514397 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 00:12:44.875848 sshd[4781]: Connection closed by 10.200.16.10 port 47956 Jan 23 00:12:44.875675 sshd-session[4778]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:44.879718 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:47956.service: Deactivated successfully. Jan 23 00:12:44.881721 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 00:12:44.884026 systemd-logind[1875]: Session 17 logged out. Waiting for processes to exit. Jan 23 00:12:44.885979 systemd-logind[1875]: Removed session 17. Jan 23 00:12:49.970488 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:38800.service - OpenSSH per-connection server daemon (10.200.16.10:38800). 
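Editor's note: the journal timestamps make it easy to see how long each of these connections stays open; session 15, for example, is opened by pam_unix at 00:12:42.068485 and closed again at 00:12:42.900204, well under a second later. As a small stdlib-only illustration (the two timestamps are copied from the log above; the helper function is hypothetical), the sketch below parses timestamps in the journal's "Jan 23 00:12:42.068485" form and prints the difference.

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // Layout matching the journal's short timestamp form, e.g. "Jan 23 00:12:42.068485".
    const journalLayout = "Jan _2 15:04:05.000000"

    // parseJournalTime is a hypothetical helper for this illustration; this timestamp
    // form carries no year, so both values land in year 0 and only the difference
    // between them is meaningful.
    func parseJournalTime(s string) time.Time {
        t, err := time.Parse(journalLayout, s)
        if err != nil {
            log.Fatalf("parse %q: %v", s, err)
        }
        return t
    }

    func main() {
        opened := parseJournalTime("Jan 23 00:12:42.068485") // pam_unix: session opened (session 15)
        closed := parseJournalTime("Jan 23 00:12:42.900204") // pam_unix: session closed (session 15)
        fmt.Printf("session 15 stayed open for %s\n", closed.Sub(opened))
    }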
Jan 23 00:12:50.458640 sshd[4818]: Accepted publickey for core from 10.200.16.10 port 38800 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:50.459984 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:50.464980 systemd-logind[1875]: New session 18 of user core. Jan 23 00:12:50.470864 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 00:12:50.852123 sshd[4838]: Connection closed by 10.200.16.10 port 38800 Jan 23 00:12:50.852022 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:50.855677 systemd-logind[1875]: Session 18 logged out. Waiting for processes to exit. Jan 23 00:12:50.856215 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:38800.service: Deactivated successfully. Jan 23 00:12:50.858171 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 00:12:50.860232 systemd-logind[1875]: Removed session 18. Jan 23 00:12:55.945250 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:38808.service - OpenSSH per-connection server daemon (10.200.16.10:38808). Jan 23 00:12:56.437783 sshd[4876]: Accepted publickey for core from 10.200.16.10 port 38808 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:12:56.438572 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:12:56.442324 systemd-logind[1875]: New session 19 of user core. Jan 23 00:12:56.457425 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 00:12:56.827981 sshd[4879]: Connection closed by 10.200.16.10 port 38808 Jan 23 00:12:56.829433 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Jan 23 00:12:56.832320 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:38808.service: Deactivated successfully. Jan 23 00:12:56.835038 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 00:12:56.836117 systemd-logind[1875]: Session 19 logged out. Waiting for processes to exit. Jan 23 00:12:56.838138 systemd-logind[1875]: Removed session 19. Jan 23 00:13:01.922098 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:43756.service - OpenSSH per-connection server daemon (10.200.16.10:43756). Jan 23 00:13:02.413314 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 43756 ssh2: RSA SHA256:kRQEAzNVhqU4Fmpx84sKU93gp2nZjfuJ8Tlyw3EYXBc Jan 23 00:13:02.414405 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:13:02.418063 systemd-logind[1875]: New session 20 of user core. Jan 23 00:13:02.430426 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 00:13:02.800643 sshd[4914]: Connection closed by 10.200.16.10 port 43756 Jan 23 00:13:02.800401 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jan 23 00:13:02.804513 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:43756.service: Deactivated successfully. Jan 23 00:13:02.806392 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 00:13:02.807172 systemd-logind[1875]: Session 20 logged out. Waiting for processes to exit. Jan 23 00:13:02.808444 systemd-logind[1875]: Removed session 20.
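Editor's note: sessions 18 through 20 repeat the same open-and-close cycle from 10.200.16.10 at roughly six-second intervals, which usually points at an automated client rather than an interactive user. If one wanted to watch this churn live instead of reading it after the fact, something like the sketch below could follow sshd's journal entries as they arrive; this is only a sketch assuming the go-systemd sdjournal bindings (which need cgo and libsystemd) and is not something the log above shows running.

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/coreos/go-systemd/v22/sdjournal"
    )

    func main() {
        j, err := sdjournal.NewJournal()
        if err != nil {
            log.Fatalf("open journal: %v", err)
        }
        defer j.Close()

        // Only follow sshd's own records, like the "Accepted publickey" lines above.
        if err := j.AddMatch("SYSLOG_IDENTIFIER=sshd"); err != nil {
            log.Fatalf("add match: %v", err)
        }
        if err := j.SeekTail(); err != nil {
            log.Fatalf("seek tail: %v", err)
        }

        for {
            n, err := j.Next()
            if err != nil {
                log.Fatalf("next entry: %v", err)
            }
            if n == 0 {
                // No new entry yet; block until the journal grows.
                j.Wait(5 * time.Second)
                continue
            }
            entry, err := j.GetEntry()
            if err != nil {
                log.Fatalf("read entry: %v", err)
            }
            fmt.Println(entry.Fields["MESSAGE"])
        }
    }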