Mar 12 02:55:28.101480 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 12 02:55:28.101498 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Mar 11 22:58:42 -00 2026
Mar 12 02:55:28.101504 kernel: KASLR enabled
Mar 12 02:55:28.101508 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 12 02:55:28.101512 kernel: printk: legacy bootconsole [pl11] enabled
Mar 12 02:55:28.101517 kernel: efi: EFI v2.7 by EDK II
Mar 12 02:55:28.101522 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 12 02:55:28.101526 kernel: random: crng init done
Mar 12 02:55:28.101530 kernel: secureboot: Secure boot disabled
Mar 12 02:55:28.101534 kernel: ACPI: Early table checksum verification disabled
Mar 12 02:55:28.101538 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 12 02:55:28.101542 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101546 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101550 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 12 02:55:28.101556 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101560 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101564 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101568 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101572 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101578 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101582 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 12 02:55:28.101586 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 02:55:28.101590 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 12 02:55:28.101594 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 12 02:55:28.101598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 12 02:55:28.101602 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 12 02:55:28.101607 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 12 02:55:28.101611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 12 02:55:28.101615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 12 02:55:28.101619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 12 02:55:28.101624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 12 02:55:28.101628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 12 02:55:28.101632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 12 02:55:28.101636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 12 02:55:28.101641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 12 02:55:28.101645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 12 02:55:28.101649 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 12 02:55:28.101653 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 12 02:55:28.101657 kernel: Zone ranges:
Mar 12 02:55:28.101662 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Mar 12 02:55:28.101668 kernel:   DMA32    empty
Mar 12 02:55:28.101673 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 02:55:28.101677 kernel:   Device   empty
Mar 12 02:55:28.101682 kernel: Movable zone start for each node
Mar 12 02:55:28.101686 kernel: Early memory node ranges
Mar 12 02:55:28.101690 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 12 02:55:28.101696 kernel:   node   0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 12 02:55:28.101700 kernel:   node   0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 12 02:55:28.101704 kernel:   node   0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 12 02:55:28.101709 kernel:   node   0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 12 02:55:28.101713 kernel:   node   0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 12 02:55:28.101717 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 02:55:28.101722 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 12 02:55:28.101726 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 12 02:55:28.101730 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 12 02:55:28.101735 kernel: psci: probing for conduit method from ACPI.
Mar 12 02:55:28.101739 kernel: psci: PSCIv1.3 detected in firmware.
Mar 12 02:55:28.101743 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 12 02:55:28.101749 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 12 02:55:28.101753 kernel: psci: SMC Calling Convention v1.4
Mar 12 02:55:28.101757 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 12 02:55:28.101762 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 12 02:55:28.101766 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 12 02:55:28.101770 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 12 02:55:28.101775 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 12 02:55:28.101779 kernel: Detected PIPT I-cache on CPU0
Mar 12 02:55:28.101784 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 12 02:55:28.101788 kernel: CPU features: detected: GIC system register CPU interface
Mar 12 02:55:28.101793 kernel: CPU features: detected: Spectre-v4
Mar 12 02:55:28.101797 kernel: CPU features: detected: Spectre-BHB
Mar 12 02:55:28.101802 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 12 02:55:28.101806 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 12 02:55:28.101811 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 12 02:55:28.101815 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 12 02:55:28.101820 kernel: alternatives: applying boot alternatives
Mar 12 02:55:28.101825 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2acf88d04fc3ef96b26cdc5f6b546a4363b33b9eef9645fad2961c4f57aac66f
Mar 12 02:55:28.101829 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 02:55:28.101834 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 02:55:28.101838 kernel: Fallback order for Node 0: 0
Mar 12 02:55:28.101843 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 12 02:55:28.101848 kernel: Policy zone: Normal
Mar 12 02:55:28.101852 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 02:55:28.101857 kernel: software IO TLB: area num 2.
Mar 12 02:55:28.101861 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Mar 12 02:55:28.101865 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 12 02:55:28.101870 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 02:55:28.101875 kernel: rcu: RCU event tracing is enabled.
Mar 12 02:55:28.101879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 12 02:55:28.101884 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 02:55:28.101888 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 02:55:28.101893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 02:55:28.101897 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 12 02:55:28.101903 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 02:55:28.101907 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 02:55:28.101912 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 12 02:55:28.101916 kernel: GICv3: 960 SPIs implemented
Mar 12 02:55:28.101920 kernel: GICv3: 0 Extended SPIs implemented
Mar 12 02:55:28.101925 kernel: Root IRQ handler: gic_handle_irq
Mar 12 02:55:28.101929 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 12 02:55:28.101933 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 12 02:55:28.101938 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 12 02:55:28.101942 kernel: ITS: No ITS available, not enabling LPIs
Mar 12 02:55:28.101947 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 02:55:28.101952 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 12 02:55:28.101956 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 02:55:28.101961 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 12 02:55:28.101965 kernel: Console: colour dummy device 80x25
Mar 12 02:55:28.101970 kernel: printk: legacy console [tty1] enabled
Mar 12 02:55:28.101975 kernel: ACPI: Core revision 20240827
Mar 12 02:55:28.101979 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 12 02:55:28.101984 kernel: pid_max: default: 32768 minimum: 301
Mar 12 02:55:28.101988 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 12 02:55:28.101993 kernel: landlock: Up and running.
Mar 12 02:55:28.101998 kernel: SELinux: Initializing.
Mar 12 02:55:28.102003 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 02:55:28.102008 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 02:55:28.102012 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 12 02:55:28.102017 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 12 02:55:28.102025 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 12 02:55:28.102031 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 02:55:28.102035 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 02:55:28.102040 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 12 02:55:28.102045 kernel: Remapping and enabling EFI services.
Mar 12 02:55:28.102050 kernel: smp: Bringing up secondary CPUs ...
Mar 12 02:55:28.102054 kernel: Detected PIPT I-cache on CPU1
Mar 12 02:55:28.102060 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 12 02:55:28.102065 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 12 02:55:28.102069 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 02:55:28.102074 kernel: SMP: Total of 2 processors activated.
Mar 12 02:55:28.102079 kernel: CPU: All CPU(s) started at EL1
Mar 12 02:55:28.102084 kernel: CPU features: detected: 32-bit EL0 Support
Mar 12 02:55:28.102089 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 12 02:55:28.102094 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 12 02:55:28.102099 kernel: CPU features: detected: Common not Private translations
Mar 12 02:55:28.102104 kernel: CPU features: detected: CRC32 instructions
Mar 12 02:55:28.102109 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 12 02:55:28.102113 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 12 02:55:28.102118 kernel: CPU features: detected: LSE atomic instructions
Mar 12 02:55:28.102123 kernel: CPU features: detected: Privileged Access Never
Mar 12 02:55:28.102129 kernel: CPU features: detected: Speculation barrier (SB)
Mar 12 02:55:28.102133 kernel: CPU features: detected: TLB range maintenance instructions
Mar 12 02:55:28.102138 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 12 02:55:28.102143 kernel: CPU features: detected: Scalable Vector Extension
Mar 12 02:55:28.102148 kernel: alternatives: applying system-wide alternatives
Mar 12 02:55:28.102152 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 12 02:55:28.102157 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 12 02:55:28.102162 kernel: SVE: default vector length 16 bytes per vector
Mar 12 02:55:28.104200 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Mar 12 02:55:28.104218 kernel: devtmpfs: initialized
Mar 12 02:55:28.104224 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 02:55:28.104229 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 12 02:55:28.104234 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 12 02:55:28.104239 kernel: 0 pages in range for non-PLT usage
Mar 12 02:55:28.104244 kernel: 508400 pages in range for PLT usage
Mar 12 02:55:28.104249 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 02:55:28.104254 kernel: SMBIOS 3.1.0 present.
Mar 12 02:55:28.104259 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 12 02:55:28.104265 kernel: DMI: Memory slots populated: 2/2
Mar 12 02:55:28.104270 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 02:55:28.104275 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 12 02:55:28.104279 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 12 02:55:28.104284 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 12 02:55:28.104289 kernel: audit: initializing netlink subsys (disabled)
Mar 12 02:55:28.104294 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 12 02:55:28.104299 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 02:55:28.104305 kernel: cpuidle: using governor menu
Mar 12 02:55:28.104310 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 12 02:55:28.104314 kernel: ASID allocator initialised with 32768 entries
Mar 12 02:55:28.104319 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 02:55:28.104324 kernel: Serial: AMBA PL011 UART driver
Mar 12 02:55:28.104329 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 02:55:28.104334 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 02:55:28.104339 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 12 02:55:28.104344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 12 02:55:28.104349 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 02:55:28.104354 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 02:55:28.104359 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 12 02:55:28.104364 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 12 02:55:28.104368 kernel: ACPI: Added _OSI(Module Device)
Mar 12 02:55:28.104373 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 02:55:28.104379 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 02:55:28.104383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 02:55:28.104388 kernel: ACPI: Interpreter enabled
Mar 12 02:55:28.104394 kernel: ACPI: Using GIC for interrupt routing
Mar 12 02:55:28.104399 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 12 02:55:28.104404 kernel: printk: legacy console [ttyAMA0] enabled
Mar 12 02:55:28.104408 kernel: printk: legacy bootconsole [pl11] disabled
Mar 12 02:55:28.104413 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 12 02:55:28.104418 kernel: ACPI: CPU0 has been hot-added
Mar 12 02:55:28.104423 kernel: ACPI: CPU1 has been hot-added
Mar 12 02:55:28.104428 kernel: iommu: Default domain type: Translated
Mar 12 02:55:28.104433 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 12 02:55:28.104438 kernel: efivars: Registered efivars operations
Mar 12 02:55:28.104443 kernel: vgaarb: loaded
Mar 12 02:55:28.104448 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 12 02:55:28.104453 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 02:55:28.104458 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 02:55:28.104462 kernel: pnp: PnP ACPI init
Mar 12 02:55:28.104467 kernel: pnp: PnP ACPI: found 0 devices
Mar 12 02:55:28.104472 kernel: NET: Registered PF_INET protocol family
Mar 12 02:55:28.104477 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 02:55:28.104482 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 02:55:28.104488 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 02:55:28.104493 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 02:55:28.104497 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 02:55:28.104502 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 02:55:28.104507 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 02:55:28.104512 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 02:55:28.104517 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 02:55:28.104521 kernel: PCI: CLS 0 bytes, default 64
Mar 12 02:55:28.104526 kernel: kvm [1]: HYP mode not available
Mar 12 02:55:28.104532 kernel: Initialise system trusted keyrings
Mar 12 02:55:28.104537 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 02:55:28.104542 kernel: Key type asymmetric registered
Mar 12 02:55:28.104546 kernel: Asymmetric key parser 'x509' registered
Mar 12 02:55:28.104551 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 12 02:55:28.104556 kernel: io scheduler mq-deadline registered
Mar 12 02:55:28.104561 kernel: io scheduler kyber registered
Mar 12 02:55:28.104565 kernel: io scheduler bfq registered
Mar 12 02:55:28.104570 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 02:55:28.104576 kernel: thunder_xcv, ver 1.0
Mar 12 02:55:28.104581 kernel: thunder_bgx, ver 1.0
Mar 12 02:55:28.104585 kernel: nicpf, ver 1.0
Mar 12 02:55:28.104590 kernel: nicvf, ver 1.0
Mar 12 02:55:28.104724 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 12 02:55:28.104775 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T02:55:27 UTC (1773284127)
Mar 12 02:55:28.104781 kernel: efifb: probing for efifb
Mar 12 02:55:28.104788 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 12 02:55:28.104792 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 12 02:55:28.104797 kernel: efifb: scrolling: redraw
Mar 12 02:55:28.104802 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 12 02:55:28.104807 kernel: Console: switching to colour frame buffer device 128x48
Mar 12 02:55:28.104812 kernel: fb0: EFI VGA frame buffer device
Mar 12 02:55:28.104817 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 12 02:55:28.104821 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 12 02:55:28.104826 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 12 02:55:28.104832 kernel: watchdog: NMI not fully supported
Mar 12 02:55:28.104837 kernel: watchdog: Hard watchdog permanently disabled
Mar 12 02:55:28.104842 kernel: NET: Registered PF_INET6 protocol family
Mar 12 02:55:28.104847 kernel: Segment Routing with IPv6
Mar 12 02:55:28.104851 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 02:55:28.104856 kernel: NET: Registered PF_PACKET protocol family
Mar 12 02:55:28.104861 kernel: Key type dns_resolver registered
Mar 12 02:55:28.104866 kernel: registered taskstats version 1
Mar 12 02:55:28.104871 kernel: Loading compiled-in X.509 certificates
Mar 12 02:55:28.104875 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5af49ccdcfac64f04a0fbbbc8f2f4ea7a0542b05'
Mar 12 02:55:28.104881 kernel: Demotion targets for Node 0: null
Mar 12 02:55:28.104886 kernel: Key type .fscrypt registered
Mar 12 02:55:28.104891 kernel: Key type fscrypt-provisioning registered
Mar 12 02:55:28.104896 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 02:55:28.104901 kernel: ima: Allocated hash algorithm: sha1
Mar 12 02:55:28.104905 kernel: ima: No architecture policies found
Mar 12 02:55:28.104910 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 12 02:55:28.104915 kernel: clk: Disabling unused clocks
Mar 12 02:55:28.104920 kernel: PM: genpd: Disabling unused power domains
Mar 12 02:55:28.104925 kernel: Warning: unable to open an initial console.
Mar 12 02:55:28.104930 kernel: Freeing unused kernel memory: 39552K
Mar 12 02:55:28.104935 kernel: Run /init as init process
Mar 12 02:55:28.104940 kernel:   with arguments:
Mar 12 02:55:28.104945 kernel:     /init
Mar 12 02:55:28.104949 kernel:   with environment:
Mar 12 02:55:28.104954 kernel:     HOME=/
Mar 12 02:55:28.104959 kernel:     TERM=linux
Mar 12 02:55:28.104964 systemd[1]: Successfully made /usr/ read-only.
Mar 12 02:55:28.104972 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 12 02:55:28.104978 systemd[1]: Detected virtualization microsoft.
Mar 12 02:55:28.104983 systemd[1]: Detected architecture arm64.
Mar 12 02:55:28.104988 systemd[1]: Running in initrd.
Mar 12 02:55:28.104993 systemd[1]: No hostname configured, using default hostname.
Mar 12 02:55:28.104998 systemd[1]: Hostname set to .
Mar 12 02:55:28.105004 systemd[1]: Initializing machine ID from random generator.
Mar 12 02:55:28.105009 systemd[1]: Queued start job for default target initrd.target.
Mar 12 02:55:28.105019 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 02:55:28.105025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 02:55:28.105030 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 02:55:28.105036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 02:55:28.105041 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 02:55:28.105047 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 02:55:28.105054 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 02:55:28.105059 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 02:55:28.105064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 02:55:28.105069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 02:55:28.105074 systemd[1]: Reached target paths.target - Path Units.
Mar 12 02:55:28.105080 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 02:55:28.105085 systemd[1]: Reached target swap.target - Swaps.
Mar 12 02:55:28.105090 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 02:55:28.105096 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 02:55:28.105101 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 02:55:28.105107 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 02:55:28.105112 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 12 02:55:28.105117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 02:55:28.105122 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 02:55:28.105128 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 02:55:28.105133 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 02:55:28.105138 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 02:55:28.105144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 02:55:28.105149 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 02:55:28.105155 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 12 02:55:28.105161 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 02:55:28.105180 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 02:55:28.105186 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 02:55:28.105204 systemd-journald[225]: Collecting audit messages is disabled.
Mar 12 02:55:28.105219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 02:55:28.105225 systemd-journald[225]: Journal started
Mar 12 02:55:28.105240 systemd-journald[225]: Runtime Journal (/run/log/journal/270823a43ec64241b4031690c40972c2) is 8M, max 78.3M, 70.3M free.
Mar 12 02:55:28.110049 systemd-modules-load[227]: Inserted module 'overlay'
Mar 12 02:55:28.125104 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 02:55:28.125725 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 02:55:28.149070 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 02:55:28.149087 kernel: Bridge firewalling registered
Mar 12 02:55:28.148450 systemd-modules-load[227]: Inserted module 'br_netfilter'
Mar 12 02:55:28.148613 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 02:55:28.155835 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 02:55:28.160596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 02:55:28.175604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 02:55:28.186570 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 02:55:28.210711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 02:55:28.217955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 02:55:28.240295 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 02:55:28.253351 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 02:55:28.266030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 02:55:28.272338 systemd-tmpfiles[256]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 12 02:55:28.274415 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 02:55:28.284860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 02:55:28.300721 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 02:55:28.327913 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 02:55:28.338698 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 02:55:28.357472 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2acf88d04fc3ef96b26cdc5f6b546a4363b33b9eef9645fad2961c4f57aac66f
Mar 12 02:55:28.357586 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 02:55:28.413161 systemd-resolved[263]: Positive Trust Anchors:
Mar 12 02:55:28.413967 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 02:55:28.413990 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 02:55:28.415723 systemd-resolved[263]: Defaulting to hostname 'linux'.
Mar 12 02:55:28.416404 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 02:55:28.422505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 02:55:28.514189 kernel: SCSI subsystem initialized
Mar 12 02:55:28.521998 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 02:55:28.528196 kernel: iscsi: registered transport (tcp)
Mar 12 02:55:28.541755 kernel: iscsi: registered transport (qla4xxx)
Mar 12 02:55:28.541788 kernel: QLogic iSCSI HBA Driver
Mar 12 02:55:28.554737 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 02:55:28.573491 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 02:55:28.580353 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 02:55:28.628205 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 02:55:28.634440 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 02:55:28.692184 kernel: raid6: neonx8   gen() 18558 MB/s
Mar 12 02:55:28.716184 kernel: raid6: neonx4   gen() 18534 MB/s
Mar 12 02:55:28.731177 kernel: raid6: neonx2   gen() 17074 MB/s
Mar 12 02:55:28.751174 kernel: raid6: neonx1   gen() 15017 MB/s
Mar 12 02:55:28.770173 kernel: raid6: int64x8  gen() 10549 MB/s
Mar 12 02:55:28.789256 kernel: raid6: int64x4  gen() 10615 MB/s
Mar 12 02:55:28.809176 kernel: raid6: int64x2  gen()  8986 MB/s
Mar 12 02:55:28.831076 kernel: raid6: int64x1  gen()  7004 MB/s
Mar 12 02:55:28.831085 kernel: raid6: using algorithm neonx8 gen() 18558 MB/s
Mar 12 02:55:28.853667 kernel: raid6: .... xor() 14906 MB/s, rmw enabled
Mar 12 02:55:28.853716 kernel: raid6: using neon recovery algorithm
Mar 12 02:55:28.862981 kernel: xor: measuring software checksum speed
Mar 12 02:55:28.863035 kernel:    8regs           : 28653 MB/sec
Mar 12 02:55:28.868515 kernel:    32regs          : 27708 MB/sec
Mar 12 02:55:28.868522 kernel:    arm64_neon      : 37638 MB/sec
Mar 12 02:55:28.871637 kernel: xor: using function: arm64_neon (37638 MB/sec)
Mar 12 02:55:28.910206 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 02:55:28.915070 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 02:55:28.925657 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 02:55:28.952627 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Mar 12 02:55:28.955511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 02:55:28.970297 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 02:55:29.001566 dracut-pre-trigger[494]: rd.md=0: removing MD RAID activation
Mar 12 02:55:29.022762 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 02:55:29.029163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 02:55:29.076156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 02:55:29.091529 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 02:55:29.148191 kernel: hv_vmbus: Vmbus version:5.3
Mar 12 02:55:29.165013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 02:55:29.189485 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 12 02:55:29.189504 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 12 02:55:29.189511 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 12 02:55:29.189518 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 12 02:55:29.189525 kernel: hv_vmbus: registering driver hid_hyperv
Mar 12 02:55:29.165135 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 02:55:29.228796 kernel: PTP clock support registered
Mar 12 02:55:29.228815 kernel: hv_vmbus: registering driver hv_storvsc
Mar 12 02:55:29.228823 kernel: scsi host0: storvsc_host_t
Mar 12 02:55:29.228942 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 12 02:55:29.228949 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 12 02:55:29.229012 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 12 02:55:29.229087 kernel: scsi host1: storvsc_host_t
Mar 12 02:55:29.229144 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 12 02:55:29.229238 kernel: hv_vmbus: registering driver hv_netvsc
Mar 12 02:55:29.229247 kernel: hv_utils: Registering HyperV Utility Driver
Mar 12 02:55:29.229254 kernel: hv_vmbus: registering driver hv_utils
Mar 12 02:55:29.229260 kernel: hv_utils: Heartbeat IC version 3.0
Mar 12 02:55:29.185003 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 02:55:29.366752 kernel: hv_utils: Shutdown IC version 3.2
Mar 12 02:55:29.366777 kernel: hv_utils: TimeSync IC version 4.0
Mar 12 02:55:29.366784 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 12 02:55:29.366928 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 12 02:55:29.366992 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 12 02:55:29.198000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 02:55:29.379889 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 12 02:55:29.380017 kernel: hv_netvsc 000d3afc-36d7-000d-3afc-36d7000d3afc eth0: VF slot 1 added
Mar 12 02:55:29.353832 systemd-resolved[263]: Clock change detected. Flushing caches.
Mar 12 02:55:29.392296 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 12 02:55:29.375183 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 12 02:55:29.387543 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 02:55:29.388062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 02:55:29.399641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 02:55:29.436463 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 02:55:29.436494 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 12 02:55:29.445872 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 12 02:55:29.446486 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 02:55:29.446496 kernel: hv_vmbus: registering driver hv_pci
Mar 12 02:55:29.446503 kernel: hv_pci 38c64a32-4260-43ed-9441-d7b641b969ef: PCI VMBus probing: Using version 0x10004
Mar 12 02:55:29.446597 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 12 02:55:29.448944 kernel: hv_pci 38c64a32-4260-43ed-9441-d7b641b969ef: PCI host bridge to bus 4260:00
Mar 12 02:55:29.469368 kernel: pci_bus 4260:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 12 02:55:29.469537 kernel: pci_bus 4260:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 12 02:55:29.469598 kernel: pci 4260:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Mar 12 02:55:29.486058 kernel: pci 4260:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 12 02:55:29.491855 kernel: pci 4260:00:02.0: enabling Extended Tags
Mar 12 02:55:29.499893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#115 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 02:55:29.500037 kernel: pci 4260:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4260:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Mar 12 02:55:29.509736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 02:55:29.537609 kernel: pci_bus 4260:00: busn_res: [bus 00-ff] end is updated to 00
Mar 12 02:55:29.537738 kernel: pci 4260:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Mar 12 02:55:29.537844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#90 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 02:55:29.593971 kernel: mlx5_core 4260:00:02.0: enabling device (0000 -> 0002)
Mar 12 02:55:29.601833 kernel: mlx5_core 4260:00:02.0: PTM is not supported by PCIe
Mar 12 02:55:29.601973 kernel: mlx5_core 4260:00:02.0: firmware version: 16.30.5026
Mar 12 02:55:29.835951 kernel: hv_netvsc 000d3afc-36d7-000d-3afc-36d7000d3afc eth0: VF registering: eth1
Mar 12 02:55:29.836242 kernel: mlx5_core 4260:00:02.0 eth1: joined to eth0
Mar 12 02:55:29.841480 kernel: mlx5_core 4260:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 12 02:55:29.853833 kernel: mlx5_core 4260:00:02.0 enP16992s1: renamed from eth1
Mar 12 02:55:29.958339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 12 02:55:29.988022 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 12 02:55:30.043787 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 12 02:55:30.054555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 12 02:55:30.062911 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 12 02:55:30.080701 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 02:55:30.096098 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 02:55:30.101524 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 02:55:30.116766 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 02:55:30.122650 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 02:55:30.133959 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 02:55:30.154827 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 02:55:30.164736 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 02:55:31.181558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 12 02:55:31.181610 disk-uuid[669]: The operation has completed successfully.
Mar 12 02:55:31.258871 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 02:55:31.262365 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 02:55:31.292106 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 02:55:31.312966 sh[834]: Success
Mar 12 02:55:31.346489 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 02:55:31.346540 kernel: device-mapper: uevent: version 1.0.3
Mar 12 02:55:31.351734 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 12 02:55:31.361833 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 12 02:55:31.594258 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 02:55:31.601093 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 02:55:31.621111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 02:55:31.638836 kernel: BTRFS: device fsid 367033b5-6658-46e0-b104-cd609725a5d6 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (852)
Mar 12 02:55:31.643754 kernel: BTRFS info (device dm-0): first mount of filesystem 367033b5-6658-46e0-b104-cd609725a5d6
Mar 12 02:55:31.648223 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:31.909857 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 12 02:55:31.909932 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 12 02:55:31.950157 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 02:55:31.954847 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 02:55:31.962038 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 02:55:31.962709 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 02:55:31.983617 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 02:55:32.011453 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (875)
Mar 12 02:55:32.011476 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:32.016052 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:32.042400 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 02:55:32.042435 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 02:55:32.050898 kernel: BTRFS info (device sda6): last unmount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:32.052686 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 02:55:32.063213 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 02:55:32.105143 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 02:55:32.116350 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 02:55:32.142533 systemd-networkd[1021]: lo: Link UP
Mar 12 02:55:32.142544 systemd-networkd[1021]: lo: Gained carrier
Mar 12 02:55:32.143477 systemd-networkd[1021]: Enumeration completed
Mar 12 02:55:32.144990 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 02:55:32.149327 systemd-networkd[1021]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 02:55:32.149330 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 02:55:32.149879 systemd[1]: Reached target network.target - Network.
Mar 12 02:55:32.219834 kernel: mlx5_core 4260:00:02.0 enP16992s1: Link up
Mar 12 02:55:32.254959 kernel: hv_netvsc 000d3afc-36d7-000d-3afc-36d7000d3afc eth0: Data path switched to VF: enP16992s1
Mar 12 02:55:32.255471 systemd-networkd[1021]: enP16992s1: Link UP
Mar 12 02:55:32.255674 systemd-networkd[1021]: eth0: Link UP
Mar 12 02:55:32.255991 systemd-networkd[1021]: eth0: Gained carrier
Mar 12 02:55:32.256003 systemd-networkd[1021]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 02:55:32.274190 systemd-networkd[1021]: enP16992s1: Gained carrier
Mar 12 02:55:32.291844 systemd-networkd[1021]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 12 02:55:33.041404 ignition[957]: Ignition 2.22.0
Mar 12 02:55:33.041415 ignition[957]: Stage: fetch-offline
Mar 12 02:55:33.044668 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 02:55:33.041518 ignition[957]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:33.051820 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 12 02:55:33.041525 ignition[957]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:33.041588 ignition[957]: parsed url from cmdline: ""
Mar 12 02:55:33.041591 ignition[957]: no config URL provided
Mar 12 02:55:33.041594 ignition[957]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 02:55:33.041599 ignition[957]: no config at "/usr/lib/ignition/user.ign"
Mar 12 02:55:33.041604 ignition[957]: failed to fetch config: resource requires networking
Mar 12 02:55:33.041711 ignition[957]: Ignition finished successfully
Mar 12 02:55:33.090957 ignition[1032]: Ignition 2.22.0
Mar 12 02:55:33.090962 ignition[1032]: Stage: fetch
Mar 12 02:55:33.091216 ignition[1032]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:33.091224 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:33.091290 ignition[1032]: parsed url from cmdline: ""
Mar 12 02:55:33.091293 ignition[1032]: no config URL provided
Mar 12 02:55:33.091296 ignition[1032]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 02:55:33.091303 ignition[1032]: no config at "/usr/lib/ignition/user.ign"
Mar 12 02:55:33.091318 ignition[1032]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 12 02:55:33.194926 ignition[1032]: GET result: OK
Mar 12 02:55:33.195013 ignition[1032]: config has been read from IMDS userdata
Mar 12 02:55:33.195044 ignition[1032]: parsing config with SHA512: d15e2500db55dd5c595b6b07264bc24aab9050cc0ddc2c508585bcbd975841a5aa79d8c5437480d70d83c6bf9bd1f32789fc1f8e4cebebaf2ebb32b059bdfa22
Mar 12 02:55:33.198225 unknown[1032]: fetched base config from "system"
Mar 12 02:55:33.198525 ignition[1032]: fetch: fetch complete
Mar 12 02:55:33.198230 unknown[1032]: fetched base config from "system"
Mar 12 02:55:33.198528 ignition[1032]: fetch: fetch passed
Mar 12 02:55:33.198233 unknown[1032]: fetched user config from "azure"
Mar 12 02:55:33.198573 ignition[1032]: Ignition finished successfully
Mar 12 02:55:33.204180 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 12 02:55:33.213066 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 02:55:33.250541 ignition[1039]: Ignition 2.22.0
Mar 12 02:55:33.250556 ignition[1039]: Stage: kargs
Mar 12 02:55:33.254288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 02:55:33.250717 ignition[1039]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:33.261007 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 02:55:33.250723 ignition[1039]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:33.251199 ignition[1039]: kargs: kargs passed
Mar 12 02:55:33.251233 ignition[1039]: Ignition finished successfully
Mar 12 02:55:33.290944 ignition[1045]: Ignition 2.22.0
Mar 12 02:55:33.290957 ignition[1045]: Stage: disks
Mar 12 02:55:33.295974 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 02:55:33.291107 ignition[1045]: no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:33.300633 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 02:55:33.291114 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:33.308748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 02:55:33.291630 ignition[1045]: disks: disks passed
Mar 12 02:55:33.316956 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 02:55:33.291665 ignition[1045]: Ignition finished successfully
Mar 12 02:55:33.325222 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 02:55:33.333825 systemd[1]: Reached target basic.target - Basic System.
Mar 12 02:55:33.344033 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 02:55:33.426037 systemd-fsck[1053]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Mar 12 02:55:33.434722 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 02:55:33.443911 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 02:55:33.659086 kernel: EXT4-fs (sda9): mounted filesystem ee35d325-c1b4-4946-897e-e080dd3c2049 r/w with ordered data mode. Quota mode: none.
Mar 12 02:55:33.659700 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 02:55:33.663323 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 02:55:33.685824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 02:55:33.698366 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 02:55:33.704583 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 12 02:55:33.719587 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 02:55:33.721866 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 02:55:33.744082 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1067)
Mar 12 02:55:33.744107 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:33.748209 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:33.754022 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 02:55:33.761850 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 02:55:33.777120 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 02:55:33.777135 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 02:55:33.778502 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 02:55:33.974095 systemd-networkd[1021]: eth0: Gained IPv6LL
Mar 12 02:55:34.303967 coreos-metadata[1069]: Mar 12 02:55:34.303 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 12 02:55:34.311913 coreos-metadata[1069]: Mar 12 02:55:34.311 INFO Fetch successful
Mar 12 02:55:34.311913 coreos-metadata[1069]: Mar 12 02:55:34.311 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 12 02:55:34.325440 coreos-metadata[1069]: Mar 12 02:55:34.325 INFO Fetch successful
Mar 12 02:55:34.339928 coreos-metadata[1069]: Mar 12 02:55:34.339 INFO wrote hostname ci-4459.2.4-n-4fd21a1aad to /sysroot/etc/hostname
Mar 12 02:55:34.347800 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 02:55:34.484317 initrd-setup-root[1097]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 02:55:34.520879 initrd-setup-root[1104]: cut: /sysroot/etc/group: No such file or directory
Mar 12 02:55:34.528575 initrd-setup-root[1111]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 02:55:34.549875 initrd-setup-root[1118]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 02:55:35.497484 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 02:55:35.503227 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 02:55:35.520375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 02:55:35.527798 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 02:55:35.540309 kernel: BTRFS info (device sda6): last unmount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:35.560205 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 02:55:35.570452 ignition[1186]: INFO : Ignition 2.22.0
Mar 12 02:55:35.570452 ignition[1186]: INFO : Stage: mount
Mar 12 02:55:35.576747 ignition[1186]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:35.576747 ignition[1186]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:35.576747 ignition[1186]: INFO : mount: mount passed
Mar 12 02:55:35.576747 ignition[1186]: INFO : Ignition finished successfully
Mar 12 02:55:35.575831 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 02:55:35.584937 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 02:55:35.608659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 02:55:35.632832 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1197)
Mar 12 02:55:35.642039 kernel: BTRFS info (device sda6): first mount of filesystem 46247c0a-a0c4-47ba-b6b0-658854ed6c55
Mar 12 02:55:35.642069 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 12 02:55:35.650546 kernel: BTRFS info (device sda6): turning on async discard
Mar 12 02:55:35.650563 kernel: BTRFS info (device sda6): enabling free space tree
Mar 12 02:55:35.651880 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 02:55:35.681280 ignition[1214]: INFO : Ignition 2.22.0
Mar 12 02:55:35.681280 ignition[1214]: INFO : Stage: files
Mar 12 02:55:35.687088 ignition[1214]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:35.687088 ignition[1214]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:35.687088 ignition[1214]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 02:55:35.687088 ignition[1214]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 02:55:35.687088 ignition[1214]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 02:55:35.749417 ignition[1214]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 02:55:35.754620 ignition[1214]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 02:55:35.754620 ignition[1214]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 02:55:35.749736 unknown[1214]: wrote ssh authorized keys file for user: core
Mar 12 02:55:35.804308 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 02:55:35.812075 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 12 02:55:35.834137 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 02:55:36.012188 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 12 02:55:36.012188 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 02:55:36.012188 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 12 02:55:36.242448 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 12 02:55:36.488700 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 02:55:36.488700 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 02:55:36.504587 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 12 02:55:36.896858 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 12 02:55:37.597228 ignition[1214]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 12 02:55:37.597228 ignition[1214]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 12 02:55:37.636703 ignition[1214]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 02:55:37.654195 ignition[1214]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 02:55:37.654195 ignition[1214]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 12 02:55:37.671068 ignition[1214]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 02:55:37.671068 ignition[1214]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 02:55:37.671068 ignition[1214]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 02:55:37.671068 ignition[1214]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 02:55:37.671068 ignition[1214]: INFO : files: files passed
Mar 12 02:55:37.671068 ignition[1214]: INFO : Ignition finished successfully
Mar 12 02:55:37.663646 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 02:55:37.677145 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 02:55:37.713286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 02:55:37.725090 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 02:55:37.725167 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 02:55:37.758574 initrd-setup-root-after-ignition[1244]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 02:55:37.758574 initrd-setup-root-after-ignition[1244]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 02:55:37.771440 initrd-setup-root-after-ignition[1248]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 02:55:37.766031 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 02:55:37.777181 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 02:55:37.789520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 02:55:37.836103 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 02:55:37.836188 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 02:55:37.845364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 02:55:37.854790 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 02:55:37.864114 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 02:55:37.864691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 02:55:37.900603 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 02:55:37.907309 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 02:55:37.933703 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 02:55:37.939302 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 02:55:37.949010 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 02:55:37.957950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 02:55:37.958036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 02:55:37.970994 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 02:55:37.975891 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 02:55:37.984714 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 02:55:37.993739 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 02:55:38.002817 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 02:55:38.012463 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 12 02:55:38.022403 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 02:55:38.031262 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 02:55:38.041547 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 02:55:38.050342 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 02:55:38.060405 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 02:55:38.069161 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 02:55:38.069260 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 02:55:38.081862 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 02:55:38.086927 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 02:55:38.096688 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 02:55:38.096746 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 02:55:38.106641 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 02:55:38.106719 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 02:55:38.120617 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 02:55:38.120706 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 02:55:38.125734 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 02:55:38.125801 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 02:55:38.201878 ignition[1268]: INFO : Ignition 2.22.0
Mar 12 02:55:38.201878 ignition[1268]: INFO : Stage: umount
Mar 12 02:55:38.201878 ignition[1268]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 02:55:38.201878 ignition[1268]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 02:55:38.201878 ignition[1268]: INFO : umount: umount passed
Mar 12 02:55:38.201878 ignition[1268]: INFO : Ignition finished successfully
Mar 12 02:55:38.134263 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 12 02:55:38.134325 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 02:55:38.146422 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 02:55:38.161525 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 02:55:38.161631 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 02:55:38.183084 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 02:55:38.191087 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 02:55:38.195736 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 02:55:38.206974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 02:55:38.207050 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 02:55:38.226165 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 02:55:38.226253 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 02:55:38.244488 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 02:55:38.245664 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 02:55:38.245913 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 02:55:38.259646 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 02:55:38.259691 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 02:55:38.269901 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 12 02:55:38.269940 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 12 02:55:38.280334 systemd[1]: Stopped target network.target - Network. Mar 12 02:55:38.288014 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 02:55:38.288058 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 02:55:38.297354 systemd[1]: Stopped target paths.target - Path Units. Mar 12 02:55:38.307135 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 02:55:38.312830 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 12 02:55:38.322704 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 02:55:38.331284 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 02:55:38.339269 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 02:55:38.339318 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 02:55:38.348436 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 02:55:38.348463 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 02:55:38.356784 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 02:55:38.356830 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 02:55:38.365860 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 02:55:38.365894 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 02:55:38.374740 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 02:55:38.383504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 02:55:38.392152 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 02:55:38.392253 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 02:55:38.409279 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 02:55:38.409398 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 02:55:38.425173 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 12 02:55:38.425322 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 02:55:38.425398 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 02:55:38.439279 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 12 02:55:38.439676 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 12 02:55:38.451386 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Mar 12 02:55:38.637033 kernel: hv_netvsc 000d3afc-36d7-000d-3afc-36d7000d3afc eth0: Data path switched from VF: enP16992s1 Mar 12 02:55:38.451422 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 02:55:38.461237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 02:55:38.474935 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 02:55:38.474987 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 02:55:38.486025 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 02:55:38.486068 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 02:55:38.495422 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 02:55:38.495455 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 02:55:38.500616 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 02:55:38.500647 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 02:55:38.515663 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 02:55:38.524203 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 12 02:55:38.524246 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 12 02:55:38.558920 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 02:55:38.559216 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 02:55:38.569514 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 02:55:38.569542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 02:55:38.579113 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Mar 12 02:55:38.579139 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 02:55:38.589616 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 02:55:38.589648 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 02:55:38.603473 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 02:55:38.603507 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 02:55:38.613846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 02:55:38.613872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 02:55:38.637982 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 02:55:38.654571 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 12 02:55:38.654628 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 02:55:38.666005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 02:55:38.666042 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 02:55:38.678185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 02:55:38.848360 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Mar 12 02:55:38.678225 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 02:55:38.688961 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 12 02:55:38.689000 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 12 02:55:38.689022 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 12 02:55:38.689275 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Mar 12 02:55:38.689356 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 02:55:38.698301 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 02:55:38.698366 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 02:55:38.703315 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 02:55:38.705440 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 02:55:38.714799 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 02:55:38.724156 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 02:55:38.724235 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 02:55:38.734913 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 02:55:38.763608 systemd[1]: Switching root. Mar 12 02:55:38.910520 systemd-journald[225]: Journal stopped Mar 12 02:55:43.109798 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 02:55:43.109973 kernel: SELinux: policy capability open_perms=1 Mar 12 02:55:43.109989 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 02:55:43.109995 kernel: SELinux: policy capability always_check_network=0 Mar 12 02:55:43.110000 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 02:55:43.110008 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 02:55:43.110015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 02:55:43.110020 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 02:55:43.110025 kernel: SELinux: policy capability userspace_initial_context=0 Mar 12 02:55:43.110031 systemd[1]: Successfully loaded SELinux policy in 169.728ms. Mar 12 02:55:43.110038 kernel: audit: type=1403 audit(1773284139.989:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 02:55:43.110046 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.329ms. 
Mar 12 02:55:43.110052 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 12 02:55:43.110058 systemd[1]: Detected virtualization microsoft. Mar 12 02:55:43.110065 systemd[1]: Detected architecture arm64. Mar 12 02:55:43.110071 systemd[1]: Detected first boot. Mar 12 02:55:43.110078 systemd[1]: Hostname set to . Mar 12 02:55:43.110086 systemd[1]: Initializing machine ID from random generator. Mar 12 02:55:43.110092 zram_generator::config[1311]: No configuration found. Mar 12 02:55:43.110098 kernel: NET: Registered PF_VSOCK protocol family Mar 12 02:55:43.110104 systemd[1]: Populated /etc with preset unit settings. Mar 12 02:55:43.110110 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 12 02:55:43.110117 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 02:55:43.110123 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 12 02:55:43.110129 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 02:55:43.110135 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 02:55:43.110142 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 02:55:43.110148 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 02:55:43.110154 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 02:55:43.110160 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 02:55:43.110167 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Mar 12 02:55:43.110173 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 02:55:43.110179 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 02:55:43.110185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 02:55:43.110191 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 02:55:43.110197 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 02:55:43.110203 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 02:55:43.110210 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 02:55:43.110217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 02:55:43.110224 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 12 02:55:43.110232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 02:55:43.110238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 02:55:43.110244 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 12 02:55:43.110250 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 02:55:43.110256 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 02:55:43.110262 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 02:55:43.110269 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 02:55:43.110275 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 02:55:43.110281 systemd[1]: Reached target slices.target - Slice Units. Mar 12 02:55:43.110287 systemd[1]: Reached target swap.target - Swaps. 
Mar 12 02:55:43.110293 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 02:55:43.110299 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 02:55:43.110307 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 12 02:55:43.110313 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 02:55:43.110319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 02:55:43.110325 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 02:55:43.110331 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 02:55:43.110338 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 12 02:55:43.110344 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 02:55:43.110352 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 02:55:43.110358 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 02:55:43.110364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 02:55:43.110370 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 02:55:43.110377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 02:55:43.110383 systemd[1]: Reached target machines.target - Containers. Mar 12 02:55:43.110389 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 02:55:43.110396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 02:55:43.110403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 12 02:55:43.110409 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 02:55:43.110415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 02:55:43.110421 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 02:55:43.110427 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 02:55:43.110434 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 02:55:43.110440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 02:55:43.110446 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 02:55:43.110453 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 02:55:43.110460 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 02:55:43.110466 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 02:55:43.110472 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 02:55:43.110478 kernel: fuse: init (API version 7.41) Mar 12 02:55:43.110485 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 12 02:55:43.110492 kernel: loop: module loaded Mar 12 02:55:43.110497 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 02:55:43.110503 kernel: ACPI: bus type drm_connector registered Mar 12 02:55:43.110510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 02:55:43.110516 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 02:55:43.110543 systemd-journald[1414]: Collecting audit messages is disabled. 
Mar 12 02:55:43.110559 systemd-journald[1414]: Journal started Mar 12 02:55:43.110574 systemd-journald[1414]: Runtime Journal (/run/log/journal/29ae42ffd4504f0d913635ef02f35af1) is 8M, max 78.3M, 70.3M free. Mar 12 02:55:42.372353 systemd[1]: Queued start job for default target multi-user.target. Mar 12 02:55:42.377328 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 12 02:55:42.377718 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 02:55:42.377992 systemd[1]: systemd-journald.service: Consumed 2.578s CPU time. Mar 12 02:55:43.121718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 02:55:43.150893 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 12 02:55:43.167447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 02:55:43.174380 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 02:55:43.174415 systemd[1]: Stopped verity-setup.service. Mar 12 02:55:43.191187 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 02:55:43.191969 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 02:55:43.198055 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 02:55:43.203059 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 02:55:43.207184 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 02:55:43.212000 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 02:55:43.217183 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 02:55:43.221613 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 02:55:43.227314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 02:55:43.233227 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Mar 12 02:55:43.234879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 02:55:43.240525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 02:55:43.240658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 02:55:43.246583 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 02:55:43.246710 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 02:55:43.252285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 02:55:43.252409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 02:55:43.259005 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 02:55:43.259122 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 02:55:43.264750 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 02:55:43.265032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 02:55:43.271219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 02:55:43.277649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 02:55:43.283825 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 02:55:43.289747 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 12 02:55:43.301768 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 02:55:43.308956 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 02:55:43.318898 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 02:55:43.323576 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Mar 12 02:55:43.323604 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 02:55:43.329178 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 12 02:55:43.335631 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 02:55:43.339992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 02:55:43.350510 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 02:55:43.355870 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 02:55:43.360630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 02:55:43.361397 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 02:55:43.366039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 02:55:43.367995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 02:55:43.375557 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 02:55:43.381938 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 02:55:43.389968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 02:55:43.398239 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 02:55:43.406322 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 02:55:43.411678 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 02:55:43.418415 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Mar 12 02:55:43.420091 systemd-journald[1414]: Time spent on flushing to /var/log/journal/29ae42ffd4504f0d913635ef02f35af1 is 13.258ms for 936 entries. Mar 12 02:55:43.420091 systemd-journald[1414]: System Journal (/var/log/journal/29ae42ffd4504f0d913635ef02f35af1) is 8M, max 2.6G, 2.6G free. Mar 12 02:55:43.475929 systemd-journald[1414]: Received client request to flush runtime journal. Mar 12 02:55:43.475987 kernel: loop0: detected capacity change from 0 to 100632 Mar 12 02:55:43.431989 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 12 02:55:43.478401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 02:55:43.506387 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 02:55:43.507790 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 12 02:55:43.513321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 02:55:43.539864 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 02:55:43.545653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 02:55:43.620056 systemd-tmpfiles[1466]: ACLs are not supported, ignoring. Mar 12 02:55:43.620388 systemd-tmpfiles[1466]: ACLs are not supported, ignoring. Mar 12 02:55:43.623342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 02:55:43.871837 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 02:55:43.908037 kernel: loop1: detected capacity change from 0 to 197488 Mar 12 02:55:43.918447 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 02:55:43.924794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 02:55:43.951017 systemd-udevd[1473]: Using default interface naming scheme 'v255'. 
Mar 12 02:55:43.984843 kernel: loop2: detected capacity change from 0 to 119840 Mar 12 02:55:44.094476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 02:55:44.106161 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 02:55:44.141295 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 12 02:55:44.153938 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 02:55:44.216845 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 02:55:44.229875 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#170 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 12 02:55:44.281357 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 02:55:44.295827 kernel: hv_vmbus: registering driver hv_balloon Mar 12 02:55:44.302885 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 12 02:55:44.302942 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 12 02:55:44.307833 kernel: hv_vmbus: registering driver hyperv_fb Mar 12 02:55:44.327145 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 12 02:55:44.327204 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 12 02:55:44.332288 kernel: Console: switching to colour dummy device 80x25 Mar 12 02:55:44.341916 kernel: Console: switching to colour frame buffer device 128x48 Mar 12 02:55:44.378677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 02:55:44.406895 kernel: loop3: detected capacity change from 0 to 27936 Mar 12 02:55:44.408371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 02:55:44.408646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 02:55:44.417639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 12 02:55:44.436047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 02:55:44.436486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 02:55:44.451678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 02:55:44.462831 kernel: MACsec IEEE 802.1AE Mar 12 02:55:44.464519 systemd-networkd[1498]: lo: Link UP Mar 12 02:55:44.464524 systemd-networkd[1498]: lo: Gained carrier Mar 12 02:55:44.465429 systemd-networkd[1498]: Enumeration completed Mar 12 02:55:44.465514 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 02:55:44.476520 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 02:55:44.476527 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 02:55:44.477108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 12 02:55:44.486285 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 02:55:44.499023 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 12 02:55:44.508277 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 02:55:44.542229 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Mar 12 02:55:44.548923 kernel: mlx5_core 4260:00:02.0 enP16992s1: Link up Mar 12 02:55:44.565832 kernel: hv_netvsc 000d3afc-36d7-000d-3afc-36d7000d3afc eth0: Data path switched to VF: enP16992s1 Mar 12 02:55:44.568120 systemd-networkd[1498]: enP16992s1: Link UP Mar 12 02:55:44.568255 systemd-networkd[1498]: eth0: Link UP Mar 12 02:55:44.568258 systemd-networkd[1498]: eth0: Gained carrier Mar 12 02:55:44.568276 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 02:55:44.569153 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 12 02:55:44.577013 systemd-networkd[1498]: enP16992s1: Gained carrier Mar 12 02:55:44.584865 systemd-networkd[1498]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 02:55:44.813859 kernel: loop4: detected capacity change from 0 to 100632 Mar 12 02:55:44.826831 kernel: loop5: detected capacity change from 0 to 197488 Mar 12 02:55:44.841836 kernel: loop6: detected capacity change from 0 to 119840 Mar 12 02:55:44.853902 kernel: loop7: detected capacity change from 0 to 27936 Mar 12 02:55:44.862286 (sd-merge)[1618]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 12 02:55:44.862654 (sd-merge)[1618]: Merged extensions into '/usr'. Mar 12 02:55:44.866011 systemd[1]: Reload requested from client PID 1451 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 02:55:44.866113 systemd[1]: Reloading... Mar 12 02:55:44.930841 zram_generator::config[1658]: No configuration found. Mar 12 02:55:45.086795 systemd[1]: Reloading finished in 220 ms. Mar 12 02:55:45.112762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 02:55:45.118647 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Mar 12 02:55:45.130733 systemd[1]: Starting ensure-sysext.service... Mar 12 02:55:45.136930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 02:55:45.150057 systemd[1]: Reload requested from client PID 1706 ('systemctl') (unit ensure-sysext.service)... Mar 12 02:55:45.150144 systemd[1]: Reloading... Mar 12 02:55:45.150263 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 12 02:55:45.150284 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 12 02:55:45.150429 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 02:55:45.150558 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 02:55:45.151051 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 02:55:45.151190 systemd-tmpfiles[1707]: ACLs are not supported, ignoring. Mar 12 02:55:45.151220 systemd-tmpfiles[1707]: ACLs are not supported, ignoring. Mar 12 02:55:45.167290 systemd-tmpfiles[1707]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 02:55:45.167302 systemd-tmpfiles[1707]: Skipping /boot Mar 12 02:55:45.172774 systemd-tmpfiles[1707]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 02:55:45.172788 systemd-tmpfiles[1707]: Skipping /boot Mar 12 02:55:45.203830 zram_generator::config[1734]: No configuration found. Mar 12 02:55:45.373064 systemd[1]: Reloading finished in 222 ms. Mar 12 02:55:45.383200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 02:55:45.402225 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Mar 12 02:55:45.413522 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 02:55:45.418940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 02:55:45.422010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 02:55:45.429256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 02:55:45.437361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 02:55:45.442614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 02:55:45.442709 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 02:55:45.445001 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 02:55:45.461977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 02:55:45.467062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 02:55:45.473179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 02:55:45.473333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 02:55:45.478837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 02:55:45.478976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 02:55:45.484960 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 02:55:45.486945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 02:55:45.496800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 02:55:45.498477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 02:55:45.505024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 02:55:45.516485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 02:55:45.521682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 02:55:45.521771 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 02:55:45.524397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 02:55:45.524553 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 02:55:45.531064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 02:55:45.532852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 02:55:45.538747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 02:55:45.538898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 02:55:45.549658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 02:55:45.552743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 02:55:45.563327 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 02:55:45.576064 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 02:55:45.583695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 02:55:45.588901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 02:55:45.590794 augenrules[1836]: No rules
Mar 12 02:55:45.590939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 12 02:55:45.591041 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 02:55:45.596646 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 02:55:45.596908 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 12 02:55:45.601665 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 02:55:45.607782 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 02:55:45.614014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 02:55:45.614173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 02:55:45.619641 systemd-resolved[1807]: Positive Trust Anchors:
Mar 12 02:55:45.619654 systemd-resolved[1807]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 02:55:45.619673 systemd-resolved[1807]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 02:55:45.619741 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 02:55:45.619888 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 02:55:45.624672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 02:55:45.624823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 02:55:45.630385 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 02:55:45.630518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 02:55:45.639014 systemd-resolved[1807]: Using system hostname 'ci-4459.2.4-n-4fd21a1aad'.
Mar 12 02:55:45.640235 systemd[1]: Finished ensure-sysext.service.
Mar 12 02:55:45.644058 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 02:55:45.651530 systemd[1]: Reached target network.target - Network.
Mar 12 02:55:45.655565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 02:55:45.660511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 02:55:45.660572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 02:55:46.133931 systemd-networkd[1498]: eth0: Gained IPv6LL
Mar 12 02:55:46.136517 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 12 02:55:46.143002 systemd[1]: Reached target network-online.target - Network is Online.
Mar 12 02:55:46.207295 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 02:55:46.213059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 02:55:48.070070 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 02:55:48.084235 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 02:55:48.090418 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 02:55:48.103070 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 02:55:48.108460 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 02:55:48.113767 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 02:55:48.119278 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 02:55:48.125322 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 02:55:48.130484 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 02:55:48.136100 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 02:55:48.141790 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 02:55:48.141837 systemd[1]: Reached target paths.target - Path Units.
Mar 12 02:55:48.146534 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 02:55:48.165702 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 02:55:48.171902 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 02:55:48.177790 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 12 02:55:48.183983 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 12 02:55:48.189666 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 12 02:55:48.196694 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 02:55:48.201568 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 12 02:55:48.207438 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 02:55:48.212062 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 02:55:48.216153 systemd[1]: Reached target basic.target - Basic System.
Mar 12 02:55:48.220273 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 02:55:48.220297 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 02:55:48.222437 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 12 02:55:48.233900 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 02:55:48.241453 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 12 02:55:48.249111 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 02:55:48.259906 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 02:55:48.270959 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 02:55:48.276734 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 02:55:48.282138 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 02:55:48.287730 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 12 02:55:48.292920 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 12 02:55:48.294371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 02:55:48.302134 jq[1864]: false
Mar 12 02:55:48.307658 KVP[1866]: KVP starting; pid is:1866
Mar 12 02:55:48.302427 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 02:55:48.309568 chronyd[1856]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Mar 12 02:55:48.312847 kernel: hv_utils: KVP IC version 4.0
Mar 12 02:55:48.313354 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 12 02:55:48.314455 KVP[1866]: KVP LIC Version: 3.1
Mar 12 02:55:48.321959 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 02:55:48.328785 chronyd[1856]: Timezone right/UTC failed leap second check, ignoring
Mar 12 02:55:48.328930 chronyd[1856]: Loaded seccomp filter (level 2)
Mar 12 02:55:48.331636 extend-filesystems[1865]: Found /dev/sda6
Mar 12 02:55:48.335905 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 02:55:48.349953 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 02:55:48.357488 extend-filesystems[1865]: Found /dev/sda9
Mar 12 02:55:48.361740 extend-filesystems[1865]: Checking size of /dev/sda9
Mar 12 02:55:48.368488 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 02:55:48.376522 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 02:55:48.376945 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 02:55:48.377642 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 02:55:48.391399 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 02:55:48.398478 jq[1896]: true
Mar 12 02:55:48.399232 systemd[1]: Started chronyd.service - NTP client/server.
Mar 12 02:55:48.406247 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 02:55:48.408929 extend-filesystems[1865]: Old size kept for /dev/sda9
Mar 12 02:55:48.417689 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 02:55:48.418297 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 02:55:48.418512 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 02:55:48.422132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 12 02:55:48.430078 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 02:55:48.430238 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 02:55:48.435645 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 12 02:55:48.446299 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 02:55:48.446468 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 02:55:48.457831 update_engine[1894]: I20260312 02:55:48.457504 1894 main.cc:92] Flatcar Update Engine starting
Mar 12 02:55:48.473398 (ntainerd)[1908]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 02:55:48.476797 jq[1907]: true
Mar 12 02:55:48.511628 systemd-logind[1890]: New seat seat0.
Mar 12 02:55:48.515857 systemd-logind[1890]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 12 02:55:48.516022 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 02:55:48.524031 tar[1905]: linux-arm64/LICENSE
Mar 12 02:55:48.524031 tar[1905]: linux-arm64/helm
Mar 12 02:55:48.587104 dbus-daemon[1859]: [system] SELinux support is enabled
Mar 12 02:55:48.587264 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 02:55:48.593547 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 02:55:48.594858 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 02:55:48.601219 update_engine[1894]: I20260312 02:55:48.601087 1894 update_check_scheduler.cc:74] Next update check in 10m53s
Mar 12 02:55:48.603365 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 02:55:48.603468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 02:55:48.612589 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 02:55:48.612871 dbus-daemon[1859]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 12 02:55:48.629702 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 02:55:48.664835 coreos-metadata[1858]: Mar 12 02:55:48.662 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 12 02:55:48.666095 coreos-metadata[1858]: Mar 12 02:55:48.665 INFO Fetch successful
Mar 12 02:55:48.666095 coreos-metadata[1858]: Mar 12 02:55:48.666 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 12 02:55:48.671143 coreos-metadata[1858]: Mar 12 02:55:48.671 INFO Fetch successful
Mar 12 02:55:48.671143 coreos-metadata[1858]: Mar 12 02:55:48.671 INFO Fetching http://168.63.129.16/machine/f752a084-f7e4-474b-9638-47c2e4c870f4/e0b40a50%2D99b3%2D469d%2D89b4%2D45d3fef98036.%5Fci%2D4459.2.4%2Dn%2D4fd21a1aad?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 12 02:55:48.674164 coreos-metadata[1858]: Mar 12 02:55:48.674 INFO Fetch successful
Mar 12 02:55:48.674164 coreos-metadata[1858]: Mar 12 02:55:48.674 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 12 02:55:48.678295 sshd_keygen[1895]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 12 02:55:48.682354 coreos-metadata[1858]: Mar 12 02:55:48.681 INFO Fetch successful
Mar 12 02:55:48.716831 bash[1958]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 02:55:48.737197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 02:55:48.745997 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 12 02:55:48.774616 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 12 02:55:48.782261 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 12 02:55:48.790114 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 12 02:55:48.800845 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 12 02:55:48.809440 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 12 02:55:48.814086 systemd[1]: issuegen.service: Deactivated successfully.
Mar 12 02:55:48.814282 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 12 02:55:48.823964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 12 02:55:48.849918 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 12 02:55:48.856333 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 12 02:55:48.869405 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 12 02:55:48.880056 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 12 02:55:48.888349 systemd[1]: Reached target getty.target - Login Prompts.
Mar 12 02:55:48.923547 locksmithd[1983]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 12 02:55:48.982246 tar[1905]: linux-arm64/README.md
Mar 12 02:55:48.995902 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 12 02:55:49.086805 containerd[1908]: time="2026-03-12T02:55:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 12 02:55:49.088059 containerd[1908]: time="2026-03-12T02:55:49.088029852Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 12 02:55:49.095270 containerd[1908]: time="2026-03-12T02:55:49.095236948Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.52µs"
Mar 12 02:55:49.095270 containerd[1908]: time="2026-03-12T02:55:49.095264300Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 12 02:55:49.095347 containerd[1908]: time="2026-03-12T02:55:49.095278412Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 12 02:55:49.095435 containerd[1908]: time="2026-03-12T02:55:49.095416124Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 12 02:55:49.095435 containerd[1908]: time="2026-03-12T02:55:49.095433452Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 12 02:55:49.095470 containerd[1908]: time="2026-03-12T02:55:49.095451524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 02:55:49.095507 containerd[1908]: time="2026-03-12T02:55:49.095493068Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 12 02:55:49.095507 containerd[1908]: time="2026-03-12T02:55:49.095504988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 02:55:49.095694 containerd[1908]: time="2026-03-12T02:55:49.095675604Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 12 02:55:49.095710 containerd[1908]: time="2026-03-12T02:55:49.095692836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 02:55:49.095710 containerd[1908]: time="2026-03-12T02:55:49.095700420Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 12 02:55:49.095710 containerd[1908]: time="2026-03-12T02:55:49.095706692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 12 02:55:49.095780 containerd[1908]: time="2026-03-12T02:55:49.095767628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 12 02:55:49.096416 containerd[1908]: time="2026-03-12T02:55:49.096394292Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 02:55:49.096435 containerd[1908]: time="2026-03-12T02:55:49.096427572Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 12 02:55:49.096448 containerd[1908]: time="2026-03-12T02:55:49.096435932Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 12 02:55:49.096465 containerd[1908]: time="2026-03-12T02:55:49.096458164Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 12 02:55:49.097514 containerd[1908]: time="2026-03-12T02:55:49.097491916Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 12 02:55:49.097585 containerd[1908]: time="2026-03-12T02:55:49.097569996Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 02:55:49.115259 containerd[1908]: time="2026-03-12T02:55:49.115182004Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 12 02:55:49.115259 containerd[1908]: time="2026-03-12T02:55:49.115242836Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 12 02:55:49.115259 containerd[1908]: time="2026-03-12T02:55:49.115254676Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 12 02:55:49.115334 containerd[1908]: time="2026-03-12T02:55:49.115264820Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 12 02:55:49.115334 containerd[1908]: time="2026-03-12T02:55:49.115273884Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 12 02:55:49.115334 containerd[1908]: time="2026-03-12T02:55:49.115280724Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 12 02:55:49.115334 containerd[1908]: time="2026-03-12T02:55:49.115295372Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 12 02:55:49.115871 containerd[1908]: time="2026-03-12T02:55:49.115303276Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 12 02:55:49.115897 containerd[1908]: time="2026-03-12T02:55:49.115878084Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 12 02:55:49.115897 containerd[1908]: time="2026-03-12T02:55:49.115891820Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 12 02:55:49.115935 containerd[1908]: time="2026-03-12T02:55:49.115900980Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 12 02:55:49.115935 containerd[1908]: time="2026-03-12T02:55:49.115917196Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 12 02:55:49.116092 containerd[1908]: time="2026-03-12T02:55:49.116073660Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 12 02:55:49.116112 containerd[1908]: time="2026-03-12T02:55:49.116101340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 12 02:55:49.116136 containerd[1908]: time="2026-03-12T02:55:49.116111532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 12 02:55:49.116136 containerd[1908]: time="2026-03-12T02:55:49.116120692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 12 02:55:49.116999 containerd[1908]: time="2026-03-12T02:55:49.116127996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 12 02:55:49.117021 containerd[1908]: time="2026-03-12T02:55:49.117006524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 12 02:55:49.117021 containerd[1908]: time="2026-03-12T02:55:49.117017380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 12 02:55:49.117086 containerd[1908]: time="2026-03-12T02:55:49.117026028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 12 02:55:49.117086 containerd[1908]: time="2026-03-12T02:55:49.117034436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 12 02:55:49.117086 containerd[1908]: time="2026-03-12T02:55:49.117054404Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 12 02:55:49.117086 containerd[1908]: time="2026-03-12T02:55:49.117062012Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 12 02:55:49.117135 containerd[1908]: time="2026-03-12T02:55:49.117127700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 12 02:55:49.117147 containerd[1908]: time="2026-03-12T02:55:49.117138724Z" level=info msg="Start snapshots syncer"
Mar 12 02:55:49.117180 containerd[1908]: time="2026-03-12T02:55:49.117160852Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 12 02:55:49.117427 containerd[1908]: time="2026-03-12T02:55:49.117392060Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 12 02:55:49.117767 containerd[1908]: time="2026-03-12T02:55:49.117744652Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 12 02:55:49.117840 containerd[1908]: time="2026-03-12T02:55:49.117823932Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 12 02:55:49.117972 containerd[1908]: time="2026-03-12T02:55:49.117942924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118333356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118350852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118362148Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118371508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118379532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118386580Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118413612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 12 02:55:49.118418 containerd[1908]: time="2026-03-12T02:55:49.118422284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118429260Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118455124Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118465340Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118476412Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118483364Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118487956Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118496540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118503340Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118516836Z" level=info msg="runtime interface created"
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118520092Z" level=info msg="created NRI interface"
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118525100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 12 02:55:49.118539 containerd[1908]: time="2026-03-12T02:55:49.118532964Z" level=info msg="Connect containerd service"
Mar 12 02:55:49.118694 containerd[1908]: time="2026-03-12T02:55:49.118569548Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 12 02:55:49.120775 containerd[1908]: time="2026-03-12T02:55:49.120747572Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 02:55:49.230712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 02:55:49.299319 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 02:55:49.566726 containerd[1908]: time="2026-03-12T02:55:49.566633300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 12 02:55:49.567482 containerd[1908]: time="2026-03-12T02:55:49.567384812Z" level=info msg="Start subscribing containerd event"
Mar 12 02:55:49.567601 containerd[1908]: time="2026-03-12T02:55:49.567583924Z" level=info msg="Start recovering state"
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567671340Z" level=info msg="Start event monitor"
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567684924Z" level=info msg="Start cni network conf syncer for default"
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567692468Z" level=info msg="Start streaming server"
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567700060Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567705260Z" level=info msg="runtime interface starting up..."
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567709212Z" level=info msg="starting plugins..."
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567469756Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 12 02:55:49.567756 containerd[1908]: time="2026-03-12T02:55:49.567721132Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 12 02:55:49.568214 systemd[1]: Started containerd.service - containerd container runtime.
Mar 12 02:55:49.573432 containerd[1908]: time="2026-03-12T02:55:49.573412540Z" level=info msg="containerd successfully booted in 0.486929s"
Mar 12 02:55:49.573599 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 12 02:55:49.579897 systemd[1]: Startup finished in 1.648s (kernel) + 12.101s (initrd) + 9.758s (userspace) = 23.509s.
Mar 12 02:55:49.614916 kubelet[2060]: E0312 02:55:49.614871 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:55:49.616858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:55:49.616958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:55:49.617589 systemd[1]: kubelet.service: Consumed 494ms CPU time, 245.8M memory peak. Mar 12 02:55:49.902936 login[2042]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 12 02:55:49.903141 login[2041]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:55:49.914024 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 02:55:49.914835 systemd-logind[1890]: New session 2 of user core. Mar 12 02:55:49.915460 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 02:55:49.948840 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 02:55:49.951374 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 02:55:49.958404 (systemd)[2084]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 02:55:49.960063 systemd-logind[1890]: New session c1 of user core. Mar 12 02:55:50.079511 systemd[2084]: Queued start job for default target default.target. Mar 12 02:55:50.083502 systemd[2084]: Created slice app.slice - User Application Slice. Mar 12 02:55:50.083523 systemd[2084]: Reached target paths.target - Paths. Mar 12 02:55:50.083550 systemd[2084]: Reached target timers.target - Timers. Mar 12 02:55:50.084454 systemd[2084]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 12 02:55:50.091936 systemd[2084]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 02:55:50.091979 systemd[2084]: Reached target sockets.target - Sockets. Mar 12 02:55:50.092013 systemd[2084]: Reached target basic.target - Basic System. Mar 12 02:55:50.092035 systemd[2084]: Reached target default.target - Main User Target. Mar 12 02:55:50.092053 systemd[2084]: Startup finished in 127ms. Mar 12 02:55:50.092133 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 02:55:50.100931 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 02:55:50.458807 waagent[2039]: 2026-03-12T02:55:50.458737Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 12 02:55:50.467921 waagent[2039]: 2026-03-12T02:55:50.464131Z INFO Daemon Daemon OS: flatcar 4459.2.4 Mar 12 02:55:50.468264 waagent[2039]: 2026-03-12T02:55:50.468228Z INFO Daemon Daemon Python: 3.11.13 Mar 12 02:55:50.472266 waagent[2039]: 2026-03-12T02:55:50.472229Z INFO Daemon Daemon Run daemon Mar 12 02:55:50.475859 waagent[2039]: 2026-03-12T02:55:50.475828Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Mar 12 02:55:50.483449 waagent[2039]: 2026-03-12T02:55:50.483417Z INFO Daemon Daemon Using waagent for provisioning Mar 12 02:55:50.488376 waagent[2039]: 2026-03-12T02:55:50.488345Z INFO Daemon Daemon Activate resource disk Mar 12 02:55:50.492499 waagent[2039]: 2026-03-12T02:55:50.492469Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 12 02:55:50.501636 waagent[2039]: 2026-03-12T02:55:50.501599Z INFO Daemon Daemon Found device: None Mar 12 02:55:50.505898 waagent[2039]: 2026-03-12T02:55:50.505867Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 12 02:55:50.513253 waagent[2039]: 2026-03-12T02:55:50.513220Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, 
message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 12 02:55:50.523436 waagent[2039]: 2026-03-12T02:55:50.523399Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 02:55:50.528463 waagent[2039]: 2026-03-12T02:55:50.528431Z INFO Daemon Daemon Running default provisioning handler Mar 12 02:55:50.538254 waagent[2039]: 2026-03-12T02:55:50.538221Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 12 02:55:50.550603 waagent[2039]: 2026-03-12T02:55:50.550570Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 12 02:55:50.559246 waagent[2039]: 2026-03-12T02:55:50.559213Z INFO Daemon Daemon cloud-init is enabled: False Mar 12 02:55:50.563648 waagent[2039]: 2026-03-12T02:55:50.563615Z INFO Daemon Daemon Copying ovf-env.xml Mar 12 02:55:50.627938 waagent[2039]: 2026-03-12T02:55:50.627875Z INFO Daemon Daemon Successfully mounted dvd Mar 12 02:55:50.652742 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 12 02:55:50.659231 waagent[2039]: 2026-03-12T02:55:50.655143Z INFO Daemon Daemon Detect protocol endpoint Mar 12 02:55:50.659530 waagent[2039]: 2026-03-12T02:55:50.659496Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 02:55:50.663976 waagent[2039]: 2026-03-12T02:55:50.663945Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 12 02:55:50.669364 waagent[2039]: 2026-03-12T02:55:50.669335Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 12 02:55:50.673770 waagent[2039]: 2026-03-12T02:55:50.673739Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 12 02:55:50.678257 waagent[2039]: 2026-03-12T02:55:50.678227Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 12 02:55:50.719334 waagent[2039]: 2026-03-12T02:55:50.719223Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 12 02:55:50.725404 waagent[2039]: 2026-03-12T02:55:50.725373Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 12 02:55:50.730182 waagent[2039]: 2026-03-12T02:55:50.730147Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 12 02:55:50.904371 login[2042]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:55:50.909494 systemd-logind[1890]: New session 1 of user core. Mar 12 02:55:50.913912 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 02:55:51.044535 waagent[2039]: 2026-03-12T02:55:51.044389Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 12 02:55:51.049400 waagent[2039]: 2026-03-12T02:55:51.049349Z INFO Daemon Daemon Forcing an update of the goal state. Mar 12 02:55:51.058164 waagent[2039]: 2026-03-12T02:55:51.058125Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 02:55:51.078335 waagent[2039]: 2026-03-12T02:55:51.078297Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 12 02:55:51.082862 waagent[2039]: 2026-03-12T02:55:51.082804Z INFO Daemon Mar 12 02:55:51.085342 waagent[2039]: 2026-03-12T02:55:51.085311Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 334ad824-9363-44f4-afb8-f547cb18a856 eTag: 15933916184958484236 source: Fabric] Mar 12 02:55:51.094622 waagent[2039]: 2026-03-12T02:55:51.094588Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 12 02:55:51.099595 waagent[2039]: 2026-03-12T02:55:51.099567Z INFO Daemon Mar 12 02:55:51.102385 waagent[2039]: 2026-03-12T02:55:51.102357Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 12 02:55:51.111518 waagent[2039]: 2026-03-12T02:55:51.111491Z INFO Daemon Daemon Downloading artifacts profile blob Mar 12 02:55:51.168559 waagent[2039]: 2026-03-12T02:55:51.168501Z INFO Daemon Downloaded certificate {'thumbprint': '1195AD466DF09621E235E0B197B72B2832E5CB8D', 'hasPrivateKey': True} Mar 12 02:55:51.176466 waagent[2039]: 2026-03-12T02:55:51.176429Z INFO Daemon Fetch goal state completed Mar 12 02:55:51.186723 waagent[2039]: 2026-03-12T02:55:51.186692Z INFO Daemon Daemon Starting provisioning Mar 12 02:55:51.190743 waagent[2039]: 2026-03-12T02:55:51.190714Z INFO Daemon Daemon Handle ovf-env.xml. Mar 12 02:55:51.194848 waagent[2039]: 2026-03-12T02:55:51.194825Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-4fd21a1aad] Mar 12 02:55:51.202115 waagent[2039]: 2026-03-12T02:55:51.202072Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-4fd21a1aad] Mar 12 02:55:51.207659 waagent[2039]: 2026-03-12T02:55:51.207625Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 12 02:55:51.213421 waagent[2039]: 2026-03-12T02:55:51.213391Z INFO Daemon Daemon Primary interface is [eth0] Mar 12 02:55:51.223943 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 02:55:51.223950 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 12 02:55:51.223978 systemd-networkd[1498]: eth0: DHCP lease lost Mar 12 02:55:51.224567 waagent[2039]: 2026-03-12T02:55:51.224481Z INFO Daemon Daemon Create user account if not exists Mar 12 02:55:51.229574 waagent[2039]: 2026-03-12T02:55:51.229541Z INFO Daemon Daemon User core already exists, skip useradd Mar 12 02:55:51.234735 waagent[2039]: 2026-03-12T02:55:51.234704Z INFO Daemon Daemon Configure sudoer Mar 12 02:55:51.255650 waagent[2039]: 2026-03-12T02:55:51.255596Z INFO Daemon Daemon Configure sshd Mar 12 02:55:51.264756 waagent[2039]: 2026-03-12T02:55:51.264714Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 12 02:55:51.265855 systemd-networkd[1498]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 02:55:51.274556 waagent[2039]: 2026-03-12T02:55:51.274512Z INFO Daemon Daemon Deploy ssh public key. Mar 12 02:55:52.386793 waagent[2039]: 2026-03-12T02:55:52.386747Z INFO Daemon Daemon Provisioning complete Mar 12 02:55:52.400161 waagent[2039]: 2026-03-12T02:55:52.400124Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 12 02:55:52.405249 waagent[2039]: 2026-03-12T02:55:52.405218Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 12 02:55:52.413172 waagent[2039]: 2026-03-12T02:55:52.413141Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 12 02:55:52.509891 waagent[2134]: 2026-03-12T02:55:52.509799Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 12 02:55:52.510160 waagent[2134]: 2026-03-12T02:55:52.509944Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Mar 12 02:55:52.510160 waagent[2134]: 2026-03-12T02:55:52.509982Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 12 02:55:52.510160 waagent[2134]: 2026-03-12T02:55:52.510016Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 12 02:55:52.549186 waagent[2134]: 2026-03-12T02:55:52.549127Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 12 02:55:52.549312 waagent[2134]: 2026-03-12T02:55:52.549284Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 02:55:52.549346 waagent[2134]: 2026-03-12T02:55:52.549331Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 02:55:52.555008 waagent[2134]: 2026-03-12T02:55:52.554965Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 02:55:52.559899 waagent[2134]: 2026-03-12T02:55:52.559869Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 12 02:55:52.560247 waagent[2134]: 2026-03-12T02:55:52.560216Z INFO ExtHandler Mar 12 02:55:52.560297 waagent[2134]: 2026-03-12T02:55:52.560280Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1df4fa38-91ac-4e0e-81c9-9c121c5116f9 eTag: 15933916184958484236 source: Fabric] Mar 12 02:55:52.560510 waagent[2134]: 2026-03-12T02:55:52.560485Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 12 02:55:52.560929 waagent[2134]: 2026-03-12T02:55:52.560897Z INFO ExtHandler Mar 12 02:55:52.560967 waagent[2134]: 2026-03-12T02:55:52.560951Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 12 02:55:52.564156 waagent[2134]: 2026-03-12T02:55:52.564134Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 12 02:55:52.619341 waagent[2134]: 2026-03-12T02:55:52.619277Z INFO ExtHandler Downloaded certificate {'thumbprint': '1195AD466DF09621E235E0B197B72B2832E5CB8D', 'hasPrivateKey': True} Mar 12 02:55:52.619694 waagent[2134]: 2026-03-12T02:55:52.619658Z INFO ExtHandler Fetch goal state completed Mar 12 02:55:52.631707 waagent[2134]: 2026-03-12T02:55:52.631661Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 12 02:55:52.634926 waagent[2134]: 2026-03-12T02:55:52.634882Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2134 Mar 12 02:55:52.635024 waagent[2134]: 2026-03-12T02:55:52.634998Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 12 02:55:52.635256 waagent[2134]: 2026-03-12T02:55:52.635229Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 12 02:55:52.636329 waagent[2134]: 2026-03-12T02:55:52.636295Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Mar 12 02:55:52.636629 waagent[2134]: 2026-03-12T02:55:52.636600Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 12 02:55:52.636739 waagent[2134]: 2026-03-12T02:55:52.636716Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 12 02:55:52.637206 waagent[2134]: 2026-03-12T02:55:52.637152Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Mar 12 02:55:52.682925 waagent[2134]: 2026-03-12T02:55:52.682892Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 12 02:55:52.683078 waagent[2134]: 2026-03-12T02:55:52.683050Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 12 02:55:52.687413 waagent[2134]: 2026-03-12T02:55:52.687371Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 12 02:55:52.691323 systemd[1]: Reload requested from client PID 2149 ('systemctl') (unit waagent.service)... Mar 12 02:55:52.691337 systemd[1]: Reloading... Mar 12 02:55:52.748850 zram_generator::config[2191]: No configuration found. Mar 12 02:55:52.892420 systemd[1]: Reloading finished in 200 ms. Mar 12 02:55:52.920844 waagent[2134]: 2026-03-12T02:55:52.919585Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 12 02:55:52.920844 waagent[2134]: 2026-03-12T02:55:52.919719Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 12 02:55:53.161312 waagent[2134]: 2026-03-12T02:55:53.160469Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 12 02:55:53.161312 waagent[2134]: 2026-03-12T02:55:53.160774Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 12 02:55:53.161488 waagent[2134]: 2026-03-12T02:55:53.161443Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 12 02:55:53.161545 waagent[2134]: 2026-03-12T02:55:53.161509Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 02:55:53.161602 waagent[2134]: 2026-03-12T02:55:53.161579Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 02:55:53.161759 waagent[2134]: 2026-03-12T02:55:53.161733Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 12 02:55:53.162089 waagent[2134]: 2026-03-12T02:55:53.162056Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 12 02:55:53.162208 waagent[2134]: 2026-03-12T02:55:53.162177Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 12 02:55:53.162208 waagent[2134]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 12 02:55:53.162208 waagent[2134]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 12 02:55:53.162208 waagent[2134]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 12 02:55:53.162208 waagent[2134]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 12 02:55:53.162208 waagent[2134]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 02:55:53.162208 waagent[2134]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 02:55:53.162576 waagent[2134]: 2026-03-12T02:55:53.162546Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 12 02:55:53.162670 waagent[2134]: 2026-03-12T02:55:53.162631Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Mar 12 02:55:53.163124 waagent[2134]: 2026-03-12T02:55:53.163095Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 02:55:53.163174 waagent[2134]: 2026-03-12T02:55:53.163154Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 02:55:53.163324 waagent[2134]: 2026-03-12T02:55:53.163247Z INFO EnvHandler ExtHandler Configure routes Mar 12 02:55:53.163324 waagent[2134]: 2026-03-12T02:55:53.163294Z INFO EnvHandler ExtHandler Gateway:None Mar 12 02:55:53.163324 waagent[2134]: 2026-03-12T02:55:53.163319Z INFO EnvHandler ExtHandler Routes:None Mar 12 02:55:53.163593 waagent[2134]: 2026-03-12T02:55:53.163569Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 12 02:55:53.163713 waagent[2134]: 2026-03-12T02:55:53.163673Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 12 02:55:53.163779 waagent[2134]: 2026-03-12T02:55:53.163758Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 12 02:55:53.170885 waagent[2134]: 2026-03-12T02:55:53.169710Z INFO ExtHandler ExtHandler Mar 12 02:55:53.170885 waagent[2134]: 2026-03-12T02:55:53.169773Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ff8e0c4d-372f-49d0-9303-1c0ccd15a9d4 correlation ec1aa073-2a3f-4d6f-98c2-7a98dc9f1d19 created: 2026-03-12T02:54:58.495178Z] Mar 12 02:55:53.170885 waagent[2134]: 2026-03-12T02:55:53.170028Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 12 02:55:53.170885 waagent[2134]: 2026-03-12T02:55:53.170399Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Mar 12 02:55:53.195361 waagent[2134]: 2026-03-12T02:55:53.195321Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 12 02:55:53.195361 waagent[2134]: Try `iptables -h' or 'iptables --help' for more information.) Mar 12 02:55:53.195758 waagent[2134]: 2026-03-12T02:55:53.195730Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 55C75F81-FFFA-41F4-9ED9-E44D29A07BE8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 12 02:55:53.222338 waagent[2134]: 2026-03-12T02:55:53.222279Z INFO MonitorHandler ExtHandler Network interfaces: Mar 12 02:55:53.222338 waagent[2134]: Executing ['ip', '-a', '-o', 'link']: Mar 12 02:55:53.222338 waagent[2134]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 12 02:55:53.222338 waagent[2134]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:36:d7 brd ff:ff:ff:ff:ff:ff Mar 12 02:55:53.222338 waagent[2134]: 3: enP16992s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:36:d7 brd ff:ff:ff:ff:ff:ff\ altname enP16992p0s2 Mar 12 02:55:53.222338 waagent[2134]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 12 02:55:53.222338 waagent[2134]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 12 02:55:53.222338 waagent[2134]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 12 02:55:53.222338 waagent[2134]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Mar 12 02:55:53.222338 waagent[2134]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 12 02:55:53.222338 waagent[2134]: 2: eth0 inet6 fe80::20d:3aff:fefc:36d7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 12 02:55:53.271788 waagent[2134]: 2026-03-12T02:55:53.271181Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 12 02:55:53.271788 waagent[2134]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:53.271788 waagent[2134]: pkts bytes target prot opt in out source destination Mar 12 02:55:53.271788 waagent[2134]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:53.271788 waagent[2134]: pkts bytes target prot opt in out source destination Mar 12 02:55:53.271788 waagent[2134]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:53.271788 waagent[2134]: pkts bytes target prot opt in out source destination Mar 12 02:55:53.271788 waagent[2134]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 12 02:55:53.271788 waagent[2134]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 02:55:53.271788 waagent[2134]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 02:55:53.273414 waagent[2134]: 2026-03-12T02:55:53.273384Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 12 02:55:53.273414 waagent[2134]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:53.273414 waagent[2134]: pkts bytes target prot opt in out source destination Mar 12 02:55:53.273414 waagent[2134]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:53.273414 waagent[2134]: pkts bytes target prot opt in out source destination Mar 12 02:55:53.273414 waagent[2134]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 02:55:53.273414 waagent[2134]: pkts bytes target prot opt in out source destination Mar 12 02:55:53.273414 waagent[2134]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp 
dpt:53 Mar 12 02:55:53.273414 waagent[2134]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 02:55:53.273414 waagent[2134]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 02:55:53.273778 waagent[2134]: 2026-03-12T02:55:53.273756Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 12 02:55:59.820530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 02:55:59.822214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:55:59.924644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:55:59.927531 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:56:00.056012 kubelet[2283]: E0312 02:56:00.055967 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:56:00.058589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:56:00.058701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:56:00.059251 systemd[1]: kubelet.service: Consumed 206ms CPU time, 105.5M memory peak. Mar 12 02:56:10.071126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 02:56:10.073060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:10.407663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 02:56:10.412174 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:56:10.433254 kubelet[2299]: E0312 02:56:10.433223 2299 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:56:10.435067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:56:10.435170 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:56:10.435594 systemd[1]: kubelet.service: Consumed 98ms CPU time, 106.9M memory peak. Mar 12 02:56:12.134240 chronyd[1856]: Selected source PHC0 Mar 12 02:56:14.547703 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 02:56:14.550171 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:60224.service - OpenSSH per-connection server daemon (10.200.16.10:60224). Mar 12 02:56:15.121178 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 60224 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:15.122219 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:15.125549 systemd-logind[1890]: New session 3 of user core. Mar 12 02:56:15.135928 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 02:56:15.443314 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:60228.service - OpenSSH per-connection server daemon (10.200.16.10:60228). 
Mar 12 02:56:15.865891 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 60228 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:15.867046 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:15.870956 systemd-logind[1890]: New session 4 of user core. Mar 12 02:56:15.878092 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 02:56:16.099327 sshd[2315]: Connection closed by 10.200.16.10 port 60228 Mar 12 02:56:16.099854 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Mar 12 02:56:16.103288 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:60228.service: Deactivated successfully. Mar 12 02:56:16.104620 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 02:56:16.106419 systemd-logind[1890]: Session 4 logged out. Waiting for processes to exit. Mar 12 02:56:16.107459 systemd-logind[1890]: Removed session 4. Mar 12 02:56:16.198910 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:60234.service - OpenSSH per-connection server daemon (10.200.16.10:60234). Mar 12 02:56:16.620793 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 60234 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:16.621802 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:16.625203 systemd-logind[1890]: New session 5 of user core. Mar 12 02:56:16.633934 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 02:56:16.851832 sshd[2324]: Connection closed by 10.200.16.10 port 60234 Mar 12 02:56:16.851723 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Mar 12 02:56:16.855186 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:60234.service: Deactivated successfully. Mar 12 02:56:16.856481 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 02:56:16.858035 systemd-logind[1890]: Session 5 logged out. 
Waiting for processes to exit. Mar 12 02:56:16.858740 systemd-logind[1890]: Removed session 5. Mar 12 02:56:16.938131 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:60242.service - OpenSSH per-connection server daemon (10.200.16.10:60242). Mar 12 02:56:17.357523 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 60242 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:17.358570 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:17.361776 systemd-logind[1890]: New session 6 of user core. Mar 12 02:56:17.373019 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 02:56:17.591956 sshd[2333]: Connection closed by 10.200.16.10 port 60242 Mar 12 02:56:17.592474 sshd-session[2330]: pam_unix(sshd:session): session closed for user core Mar 12 02:56:17.595608 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:60242.service: Deactivated successfully. Mar 12 02:56:17.597311 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 02:56:17.597984 systemd-logind[1890]: Session 6 logged out. Waiting for processes to exit. Mar 12 02:56:17.599275 systemd-logind[1890]: Removed session 6. Mar 12 02:56:17.678142 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:60254.service - OpenSSH per-connection server daemon (10.200.16.10:60254). Mar 12 02:56:18.094964 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 60254 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:18.096050 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:18.099564 systemd-logind[1890]: New session 7 of user core. Mar 12 02:56:18.107912 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 12 02:56:18.356175 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 02:56:18.356387 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 02:56:18.367312 sudo[2343]: pam_unix(sudo:session): session closed for user root Mar 12 02:56:18.444042 sshd[2342]: Connection closed by 10.200.16.10 port 60254 Mar 12 02:56:18.444626 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Mar 12 02:56:18.447795 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:60254.service: Deactivated successfully. Mar 12 02:56:18.449081 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 02:56:18.449656 systemd-logind[1890]: Session 7 logged out. Waiting for processes to exit. Mar 12 02:56:18.450613 systemd-logind[1890]: Removed session 7. Mar 12 02:56:18.532163 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:60260.service - OpenSSH per-connection server daemon (10.200.16.10:60260). Mar 12 02:56:18.952259 sshd[2349]: Accepted publickey for core from 10.200.16.10 port 60260 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:18.953347 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:18.956747 systemd-logind[1890]: New session 8 of user core. Mar 12 02:56:18.964922 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 12 02:56:19.110208 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 02:56:19.110416 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 02:56:19.117624 sudo[2354]: pam_unix(sudo:session): session closed for user root Mar 12 02:56:19.121181 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 12 02:56:19.121610 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 02:56:19.128668 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 12 02:56:19.161954 augenrules[2376]: No rules Mar 12 02:56:19.163106 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 02:56:19.163880 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 12 02:56:19.165236 sudo[2353]: pam_unix(sudo:session): session closed for user root Mar 12 02:56:19.243304 sshd[2352]: Connection closed by 10.200.16.10 port 60260 Mar 12 02:56:19.243714 sshd-session[2349]: pam_unix(sshd:session): session closed for user core Mar 12 02:56:19.246316 systemd-logind[1890]: Session 8 logged out. Waiting for processes to exit. Mar 12 02:56:19.247258 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:60260.service: Deactivated successfully. Mar 12 02:56:19.248755 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 02:56:19.250749 systemd-logind[1890]: Removed session 8. Mar 12 02:56:19.332006 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:60266.service - OpenSSH per-connection server daemon (10.200.16.10:60266). 
Mar 12 02:56:19.762540 sshd[2385]: Accepted publickey for core from 10.200.16.10 port 60266 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:56:19.763667 sshd-session[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:56:19.767964 systemd-logind[1890]: New session 9 of user core. Mar 12 02:56:19.773943 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 02:56:19.921665 sudo[2389]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 02:56:19.921881 sudo[2389]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 02:56:20.570561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 12 02:56:20.571881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:21.115525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:21.118254 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:56:21.142770 kubelet[2414]: E0312 02:56:21.142738 2414 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:56:21.144374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:56:21.144475 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:56:21.144869 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107M memory peak. Mar 12 02:56:21.437227 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 12 02:56:21.449038 (dockerd)[2421]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 02:56:22.223407 dockerd[2421]: time="2026-03-12T02:56:22.223357001Z" level=info msg="Starting up" Mar 12 02:56:22.224006 dockerd[2421]: time="2026-03-12T02:56:22.223983023Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 12 02:56:22.231657 dockerd[2421]: time="2026-03-12T02:56:22.231626975Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 12 02:56:22.364978 dockerd[2421]: time="2026-03-12T02:56:22.364945481Z" level=info msg="Loading containers: start." Mar 12 02:56:22.376852 kernel: Initializing XFRM netlink socket Mar 12 02:56:22.666710 systemd-networkd[1498]: docker0: Link UP Mar 12 02:56:22.683393 dockerd[2421]: time="2026-03-12T02:56:22.683357570Z" level=info msg="Loading containers: done." Mar 12 02:56:22.692286 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2183904350-merged.mount: Deactivated successfully. 
Mar 12 02:56:22.706380 dockerd[2421]: time="2026-03-12T02:56:22.706340724Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 02:56:22.706500 dockerd[2421]: time="2026-03-12T02:56:22.706411647Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 12 02:56:22.706500 dockerd[2421]: time="2026-03-12T02:56:22.706487401Z" level=info msg="Initializing buildkit" Mar 12 02:56:22.766774 dockerd[2421]: time="2026-03-12T02:56:22.766599702Z" level=info msg="Completed buildkit initialization" Mar 12 02:56:22.769688 dockerd[2421]: time="2026-03-12T02:56:22.769657602Z" level=info msg="Daemon has completed initialization" Mar 12 02:56:22.769920 dockerd[2421]: time="2026-03-12T02:56:22.769886403Z" level=info msg="API listen on /run/docker.sock" Mar 12 02:56:22.770046 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 02:56:23.327436 containerd[1908]: time="2026-03-12T02:56:23.327172044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 12 02:56:24.097131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274601847.mount: Deactivated successfully. 
Mar 12 02:56:25.297024 containerd[1908]: time="2026-03-12T02:56:25.296967487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:25.300806 containerd[1908]: time="2026-03-12T02:56:25.300778991Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796" Mar 12 02:56:25.304048 containerd[1908]: time="2026-03-12T02:56:25.304024379Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:25.309148 containerd[1908]: time="2026-03-12T02:56:25.309121456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:25.309723 containerd[1908]: time="2026-03-12T02:56:25.309691732Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 1.982484783s" Mar 12 02:56:25.309741 containerd[1908]: time="2026-03-12T02:56:25.309730046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\"" Mar 12 02:56:25.310252 containerd[1908]: time="2026-03-12T02:56:25.310226863Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 12 02:56:26.351918 containerd[1908]: time="2026-03-12T02:56:26.351869790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:26.354834 containerd[1908]: time="2026-03-12T02:56:26.354731540Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039" Mar 12 02:56:26.358378 containerd[1908]: time="2026-03-12T02:56:26.358340412Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:26.363565 containerd[1908]: time="2026-03-12T02:56:26.363533909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:26.364138 containerd[1908]: time="2026-03-12T02:56:26.363997286Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.053741478s" Mar 12 02:56:26.364138 containerd[1908]: time="2026-03-12T02:56:26.364023807Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\"" Mar 12 02:56:26.364443 containerd[1908]: time="2026-03-12T02:56:26.364417085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 12 02:56:27.306241 containerd[1908]: time="2026-03-12T02:56:27.306142989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:27.309216 containerd[1908]: time="2026-03-12T02:56:27.309193530Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901" Mar 12 02:56:27.312905 containerd[1908]: time="2026-03-12T02:56:27.312870589Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:27.317900 containerd[1908]: time="2026-03-12T02:56:27.317864767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:27.318637 containerd[1908]: time="2026-03-12T02:56:27.318160721Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 953.653073ms" Mar 12 02:56:27.318637 containerd[1908]: time="2026-03-12T02:56:27.318188914Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\"" Mar 12 02:56:27.318987 containerd[1908]: time="2026-03-12T02:56:27.318971094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 12 02:56:28.268739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804483536.mount: Deactivated successfully. 
Mar 12 02:56:28.463008 containerd[1908]: time="2026-03-12T02:56:28.462957223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:28.467667 containerd[1908]: time="2026-03-12T02:56:28.467548351Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583" Mar 12 02:56:28.470900 containerd[1908]: time="2026-03-12T02:56:28.470878406Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:28.475875 containerd[1908]: time="2026-03-12T02:56:28.475535169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:28.475875 containerd[1908]: time="2026-03-12T02:56:28.475771147Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.156659816s" Mar 12 02:56:28.475875 containerd[1908]: time="2026-03-12T02:56:28.475796141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\"" Mar 12 02:56:28.476298 containerd[1908]: time="2026-03-12T02:56:28.476272378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 12 02:56:29.238146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287037231.mount: Deactivated successfully. 
Mar 12 02:56:30.238319 containerd[1908]: time="2026-03-12T02:56:30.238264640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:30.242355 containerd[1908]: time="2026-03-12T02:56:30.242133455Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211" Mar 12 02:56:30.245787 containerd[1908]: time="2026-03-12T02:56:30.245762636Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:30.253979 containerd[1908]: time="2026-03-12T02:56:30.253950286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:30.254869 containerd[1908]: time="2026-03-12T02:56:30.254667127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.778369588s" Mar 12 02:56:30.254869 containerd[1908]: time="2026-03-12T02:56:30.254691632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Mar 12 02:56:30.255399 containerd[1908]: time="2026-03-12T02:56:30.255345126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 12 02:56:31.240038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 12 02:56:31.241105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 12 02:56:31.242367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985186502.mount: Deactivated successfully. Mar 12 02:56:31.264005 containerd[1908]: time="2026-03-12T02:56:31.263970342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:31.266911 containerd[1908]: time="2026-03-12T02:56:31.266883578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 12 02:56:31.275636 containerd[1908]: time="2026-03-12T02:56:31.275594364Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:31.336340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:31.338792 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 02:56:31.361399 kubelet[2770]: E0312 02:56:31.361348 2770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 02:56:31.363084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 02:56:31.363272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 02:56:31.363728 systemd[1]: kubelet.service: Consumed 97ms CPU time, 105.1M memory peak. 
Mar 12 02:56:31.504302 containerd[1908]: time="2026-03-12T02:56:31.503466678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:31.504302 containerd[1908]: time="2026-03-12T02:56:31.503979182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 1.248611255s" Mar 12 02:56:31.504302 containerd[1908]: time="2026-03-12T02:56:31.504002663Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 12 02:56:31.504823 containerd[1908]: time="2026-03-12T02:56:31.504788538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 12 02:56:32.383602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967476730.mount: Deactivated successfully. Mar 12 02:56:32.455839 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Mar 12 02:56:33.351704 containerd[1908]: time="2026-03-12T02:56:33.351636828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:33.356418 containerd[1908]: time="2026-03-12T02:56:33.356386435Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165" Mar 12 02:56:33.360122 containerd[1908]: time="2026-03-12T02:56:33.360088107Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:33.367842 containerd[1908]: time="2026-03-12T02:56:33.367343171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:56:33.367946 containerd[1908]: time="2026-03-12T02:56:33.367913125Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.862972244s" Mar 12 02:56:33.367946 containerd[1908]: time="2026-03-12T02:56:33.367942319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Mar 12 02:56:34.081913 update_engine[1894]: I20260312 02:56:34.081842 1894 update_attempter.cc:509] Updating boot flags... Mar 12 02:56:34.333852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:34.333955 systemd[1]: kubelet.service: Consumed 97ms CPU time, 105.1M memory peak. 
Mar 12 02:56:34.335762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:34.357899 systemd[1]: Reload requested from client PID 2928 ('systemctl') (unit session-9.scope)... Mar 12 02:56:34.357910 systemd[1]: Reloading... Mar 12 02:56:34.448850 zram_generator::config[2975]: No configuration found. Mar 12 02:56:34.610630 systemd[1]: Reloading finished in 252 ms. Mar 12 02:56:34.652475 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 02:56:34.652667 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 02:56:34.652952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:34.653072 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95M memory peak. Mar 12 02:56:34.654130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:34.883359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:34.890009 (kubelet)[3042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 02:56:34.914183 kubelet[3042]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 02:56:35.068154 kubelet[3042]: I0312 02:56:35.068095 3042 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 12 02:56:35.068154 kubelet[3042]: I0312 02:56:35.068141 3042 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 02:56:35.069283 kubelet[3042]: I0312 02:56:35.069261 3042 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 02:56:35.069283 kubelet[3042]: I0312 02:56:35.069280 3042 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 02:56:35.069514 kubelet[3042]: I0312 02:56:35.069499 3042 server.go:951] "Client rotation is on, will bootstrap in background" Mar 12 02:56:35.418231 kubelet[3042]: E0312 02:56:35.418181 3042 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 02:56:35.418231 kubelet[3042]: I0312 02:56:35.418266 3042 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 02:56:35.422246 kubelet[3042]: I0312 02:56:35.422210 3042 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 02:56:35.424972 kubelet[3042]: I0312 02:56:35.424905 3042 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 02:56:35.425199 kubelet[3042]: I0312 02:56:35.425176 3042 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 02:56:35.425365 kubelet[3042]: I0312 02:56:35.425250 3042 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-4fd21a1aad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 02:56:35.425478 kubelet[3042]: I0312 02:56:35.425468 3042 topology_manager.go:143] "Creating topology manager with none policy" Mar 12 
02:56:35.425770 kubelet[3042]: I0312 02:56:35.425522 3042 container_manager_linux.go:308] "Creating device plugin manager" Mar 12 02:56:35.425770 kubelet[3042]: I0312 02:56:35.425616 3042 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 02:56:35.430736 kubelet[3042]: I0312 02:56:35.430716 3042 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 12 02:56:35.431243 kubelet[3042]: I0312 02:56:35.430943 3042 kubelet.go:482] "Attempting to sync node with API server" Mar 12 02:56:35.431325 kubelet[3042]: I0312 02:56:35.431315 3042 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 02:56:35.431384 kubelet[3042]: I0312 02:56:35.431377 3042 kubelet.go:394] "Adding apiserver pod source" Mar 12 02:56:35.431438 kubelet[3042]: I0312 02:56:35.431429 3042 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 02:56:35.435006 kubelet[3042]: I0312 02:56:35.434951 3042 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 02:56:35.435542 kubelet[3042]: I0312 02:56:35.435520 3042 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 02:56:35.435573 kubelet[3042]: I0312 02:56:35.435549 3042 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 02:56:35.435591 kubelet[3042]: W0312 02:56:35.435579 3042 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 12 02:56:35.437865 kubelet[3042]: I0312 02:56:35.437752 3042 server.go:1257] "Started kubelet" Mar 12 02:56:35.439077 kubelet[3042]: I0312 02:56:35.438986 3042 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 12 02:56:35.442515 kubelet[3042]: E0312 02:56:35.441724 3042 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-4fd21a1aad.189bf8949ae132c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-4fd21a1aad,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-4fd21a1aad,},FirstTimestamp:2026-03-12 02:56:35.437720256 +0000 UTC m=+0.545304397,LastTimestamp:2026-03-12 02:56:35.437720256 +0000 UTC m=+0.545304397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-4fd21a1aad,}" Mar 12 02:56:35.443040 kubelet[3042]: I0312 02:56:35.442946 3042 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 02:56:35.443641 kubelet[3042]: I0312 02:56:35.443626 3042 server.go:317] "Adding debug handlers to kubelet server" Mar 12 02:56:35.446044 kubelet[3042]: I0312 02:56:35.446027 3042 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 12 02:56:35.446220 kubelet[3042]: E0312 02:56:35.446198 3042 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" Mar 12 02:56:35.446348 kubelet[3042]: I0312 02:56:35.446313 3042 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 02:56:35.446410 kubelet[3042]: I0312 02:56:35.446401 3042 server_v1.go:49] "podresources" method="list" 
useActivePods=true Mar 12 02:56:35.446583 kubelet[3042]: I0312 02:56:35.446570 3042 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 02:56:35.446869 kubelet[3042]: I0312 02:56:35.446856 3042 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 02:56:35.448069 kubelet[3042]: I0312 02:56:35.447900 3042 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 02:56:35.448069 kubelet[3042]: I0312 02:56:35.447939 3042 reconciler.go:29] "Reconciler: start to sync state" Mar 12 02:56:35.448834 kubelet[3042]: E0312 02:56:35.448297 3042 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-4fd21a1aad?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Mar 12 02:56:35.448834 kubelet[3042]: I0312 02:56:35.448607 3042 factory.go:223] Registration of the systemd container factory successfully Mar 12 02:56:35.449611 kubelet[3042]: I0312 02:56:35.449593 3042 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 02:56:35.451122 kubelet[3042]: I0312 02:56:35.451108 3042 factory.go:223] Registration of the containerd container factory successfully Mar 12 02:56:35.472185 kubelet[3042]: I0312 02:56:35.472161 3042 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 12 02:56:35.472444 kubelet[3042]: I0312 02:56:35.472421 3042 cpu_manager.go:225] "Starting" policy="none" Mar 12 02:56:35.472492 kubelet[3042]: I0312 02:56:35.472460 3042 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 02:56:35.472492 kubelet[3042]: I0312 02:56:35.472475 3042 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 12 02:56:35.474003 kubelet[3042]: I0312 02:56:35.473990 3042 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 02:56:35.474371 kubelet[3042]: I0312 02:56:35.474295 3042 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 12 02:56:35.474453 kubelet[3042]: I0312 02:56:35.474443 3042 kubelet.go:2501] "Starting kubelet main sync loop" Mar 12 02:56:35.474533 kubelet[3042]: E0312 02:56:35.474521 3042 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 02:56:35.479210 kubelet[3042]: I0312 02:56:35.479192 3042 policy_none.go:50] "Start" Mar 12 02:56:35.479210 kubelet[3042]: I0312 02:56:35.479210 3042 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 02:56:35.479291 kubelet[3042]: I0312 02:56:35.479218 3042 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 02:56:35.489665 kubelet[3042]: I0312 02:56:35.489641 3042 policy_none.go:44] "Start" Mar 12 02:56:35.493040 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 02:56:35.506103 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 02:56:35.508317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 12 02:56:35.515421 kubelet[3042]: E0312 02:56:35.515402 3042 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 02:56:35.516074 kubelet[3042]: I0312 02:56:35.515664 3042 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 12 02:56:35.516074 kubelet[3042]: I0312 02:56:35.515676 3042 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 02:56:35.516074 kubelet[3042]: I0312 02:56:35.515888 3042 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 12 02:56:35.517420 kubelet[3042]: E0312 02:56:35.517404 3042 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 02:56:35.517512 kubelet[3042]: E0312 02:56:35.517503 3042 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-4fd21a1aad\" not found" Mar 12 02:56:35.586617 systemd[1]: Created slice kubepods-burstable-pod9aede371496d52f5ede7bc29e391eaf9.slice - libcontainer container kubepods-burstable-pod9aede371496d52f5ede7bc29e391eaf9.slice. Mar 12 02:56:35.598178 kubelet[3042]: E0312 02:56:35.597762 3042 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.601418 systemd[1]: Created slice kubepods-burstable-pod0129ffccab21975a381ab4724dfbb1e6.slice - libcontainer container kubepods-burstable-pod0129ffccab21975a381ab4724dfbb1e6.slice. 
Mar 12 02:56:35.605733 kubelet[3042]: E0312 02:56:35.605717 3042 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.607398 systemd[1]: Created slice kubepods-burstable-pode99a172f8bdb41b67e0da45f56098e0d.slice - libcontainer container kubepods-burstable-pode99a172f8bdb41b67e0da45f56098e0d.slice. Mar 12 02:56:35.608895 kubelet[3042]: E0312 02:56:35.608876 3042 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.616930 kubelet[3042]: I0312 02:56:35.616910 3042 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.617242 kubelet[3042]: E0312 02:56:35.617221 3042 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.648740 kubelet[3042]: E0312 02:56:35.648708 3042 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-4fd21a1aad?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Mar 12 02:56:35.649893 kubelet[3042]: I0312 02:56:35.649874 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650017 kubelet[3042]: I0312 02:56:35.649896 3042 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650017 kubelet[3042]: I0312 02:56:35.649908 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9aede371496d52f5ede7bc29e391eaf9-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-4fd21a1aad\" (UID: \"9aede371496d52f5ede7bc29e391eaf9\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650017 kubelet[3042]: I0312 02:56:35.649918 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0129ffccab21975a381ab4724dfbb1e6-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" (UID: \"0129ffccab21975a381ab4724dfbb1e6\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650017 kubelet[3042]: I0312 02:56:35.649926 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650017 kubelet[3042]: I0312 02:56:35.649938 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650137 kubelet[3042]: I0312 02:56:35.649968 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650137 kubelet[3042]: I0312 02:56:35.649979 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0129ffccab21975a381ab4724dfbb1e6-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" (UID: \"0129ffccab21975a381ab4724dfbb1e6\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.650242 kubelet[3042]: I0312 02:56:35.650206 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0129ffccab21975a381ab4724dfbb1e6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" (UID: \"0129ffccab21975a381ab4724dfbb1e6\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.819630 kubelet[3042]: I0312 02:56:35.819052 3042 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:35.819630 kubelet[3042]: E0312 02:56:35.819390 3042 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:36.049966 kubelet[3042]: E0312 02:56:36.049924 3042 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-4fd21a1aad?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Mar 12 02:56:36.220973 kubelet[3042]: I0312 02:56:36.220925 3042 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:36.221288 kubelet[3042]: E0312 02:56:36.221265 3042 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:36.359563 containerd[1908]: time="2026-03-12T02:56:36.359520373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-4fd21a1aad,Uid:9aede371496d52f5ede7bc29e391eaf9,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:36.365173 containerd[1908]: time="2026-03-12T02:56:36.365142377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-4fd21a1aad,Uid:0129ffccab21975a381ab4724dfbb1e6,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:36.371629 containerd[1908]: time="2026-03-12T02:56:36.371601848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-4fd21a1aad,Uid:e99a172f8bdb41b67e0da45f56098e0d,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:36.851122 kubelet[3042]: E0312 02:56:36.851073 3042 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-4fd21a1aad?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Mar 12 02:56:36.903749 kubelet[3042]: E0312 02:56:36.903638 3042 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4459.2.4-n-4fd21a1aad.189bf8949ae132c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-4fd21a1aad,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-4fd21a1aad,},FirstTimestamp:2026-03-12 02:56:35.437720256 +0000 UTC m=+0.545304397,LastTimestamp:2026-03-12 02:56:35.437720256 +0000 UTC m=+0.545304397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-4fd21a1aad,}" Mar 12 02:56:37.023324 kubelet[3042]: I0312 02:56:37.023298 3042 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:37.023555 kubelet[3042]: E0312 02:56:37.023533 3042 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:37.546248 kubelet[3042]: E0312 02:56:37.546211 3042 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 02:56:37.734069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068889844.mount: Deactivated successfully. 
Mar 12 02:56:37.756193 containerd[1908]: time="2026-03-12T02:56:37.756148357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:37.767861 containerd[1908]: time="2026-03-12T02:56:37.767834279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 12 02:56:37.775388 containerd[1908]: time="2026-03-12T02:56:37.775352395Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:37.778844 containerd[1908]: time="2026-03-12T02:56:37.778579066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:37.785145 containerd[1908]: time="2026-03-12T02:56:37.785116860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 12 02:56:37.789555 containerd[1908]: time="2026-03-12T02:56:37.789522869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 02:56:37.792654 containerd[1908]: time="2026-03-12T02:56:37.792630184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 12 02:56:37.796094 containerd[1908]: time="2026-03-12T02:56:37.796036967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 
02:56:37.797730 containerd[1908]: time="2026-03-12T02:56:37.796332283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.431607524s" Mar 12 02:56:37.800182 containerd[1908]: time="2026-03-12T02:56:37.800160732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.417881307s" Mar 12 02:56:37.807167 containerd[1908]: time="2026-03-12T02:56:37.807144873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.431401947s" Mar 12 02:56:37.871789 containerd[1908]: time="2026-03-12T02:56:37.871756399Z" level=info msg="connecting to shim 853e91cfbc1068973bf38276cbf73ddc84e743a080b94e673a538d5181329fd8" address="unix:///run/containerd/s/6dea70829b749469cf811822a2cba5840aeafcae2786a873964a2f9643bde1ec" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:37.887041 containerd[1908]: time="2026-03-12T02:56:37.887005583Z" level=info msg="connecting to shim 6baacf3106bc0f456d6cc528d3173dbe21f7ba20b8a790481dc1bf1248933d7d" address="unix:///run/containerd/s/fc8f8b392274a64c93b3d6a88873d4f0de0fd2d2a09fee46e53b6b7ef47da39b" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:37.892502 containerd[1908]: time="2026-03-12T02:56:37.892467020Z" level=info msg="connecting to shim 
6eaf65c47f60ebc856f3a8a23950481c9553d74a2d0e668688c4f700a662ad26" address="unix:///run/containerd/s/4215ad1b12ee1aadae6183ae21c880af154ae11f28772d1a709f9575acde2bc8" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:37.897947 systemd[1]: Started cri-containerd-853e91cfbc1068973bf38276cbf73ddc84e743a080b94e673a538d5181329fd8.scope - libcontainer container 853e91cfbc1068973bf38276cbf73ddc84e743a080b94e673a538d5181329fd8. Mar 12 02:56:37.914003 systemd[1]: Started cri-containerd-6baacf3106bc0f456d6cc528d3173dbe21f7ba20b8a790481dc1bf1248933d7d.scope - libcontainer container 6baacf3106bc0f456d6cc528d3173dbe21f7ba20b8a790481dc1bf1248933d7d. Mar 12 02:56:37.916868 systemd[1]: Started cri-containerd-6eaf65c47f60ebc856f3a8a23950481c9553d74a2d0e668688c4f700a662ad26.scope - libcontainer container 6eaf65c47f60ebc856f3a8a23950481c9553d74a2d0e668688c4f700a662ad26. Mar 12 02:56:37.957493 containerd[1908]: time="2026-03-12T02:56:37.957457835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-4fd21a1aad,Uid:e99a172f8bdb41b67e0da45f56098e0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6baacf3106bc0f456d6cc528d3173dbe21f7ba20b8a790481dc1bf1248933d7d\"" Mar 12 02:56:37.968733 containerd[1908]: time="2026-03-12T02:56:37.968698138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-4fd21a1aad,Uid:9aede371496d52f5ede7bc29e391eaf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"853e91cfbc1068973bf38276cbf73ddc84e743a080b94e673a538d5181329fd8\"" Mar 12 02:56:37.970487 containerd[1908]: time="2026-03-12T02:56:37.970462980Z" level=info msg="CreateContainer within sandbox \"6baacf3106bc0f456d6cc528d3173dbe21f7ba20b8a790481dc1bf1248933d7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 02:56:37.972073 containerd[1908]: time="2026-03-12T02:56:37.972041894Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-4fd21a1aad,Uid:0129ffccab21975a381ab4724dfbb1e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eaf65c47f60ebc856f3a8a23950481c9553d74a2d0e668688c4f700a662ad26\"" Mar 12 02:56:37.976440 containerd[1908]: time="2026-03-12T02:56:37.976057935Z" level=info msg="CreateContainer within sandbox \"853e91cfbc1068973bf38276cbf73ddc84e743a080b94e673a538d5181329fd8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 02:56:37.984225 containerd[1908]: time="2026-03-12T02:56:37.984203557Z" level=info msg="CreateContainer within sandbox \"6eaf65c47f60ebc856f3a8a23950481c9553d74a2d0e668688c4f700a662ad26\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 02:56:38.004750 containerd[1908]: time="2026-03-12T02:56:38.004729122Z" level=info msg="Container 589d46280af6f33f5138037b230202a0686de23bb1b42a15b04e567bb731e043: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:56:38.017008 containerd[1908]: time="2026-03-12T02:56:38.016963027Z" level=info msg="Container dffdc3e1f9dc7a5db7825098719de84bfa3eb393c1e10b20e512751941930a6b: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:56:38.034225 containerd[1908]: time="2026-03-12T02:56:38.034194022Z" level=info msg="CreateContainer within sandbox \"6baacf3106bc0f456d6cc528d3173dbe21f7ba20b8a790481dc1bf1248933d7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"589d46280af6f33f5138037b230202a0686de23bb1b42a15b04e567bb731e043\"" Mar 12 02:56:38.034831 containerd[1908]: time="2026-03-12T02:56:38.034682154Z" level=info msg="StartContainer for \"589d46280af6f33f5138037b230202a0686de23bb1b42a15b04e567bb731e043\"" Mar 12 02:56:38.035908 containerd[1908]: time="2026-03-12T02:56:38.035890349Z" level=info msg="connecting to shim 589d46280af6f33f5138037b230202a0686de23bb1b42a15b04e567bb731e043" address="unix:///run/containerd/s/fc8f8b392274a64c93b3d6a88873d4f0de0fd2d2a09fee46e53b6b7ef47da39b" 
protocol=ttrpc version=3 Mar 12 02:56:38.039338 containerd[1908]: time="2026-03-12T02:56:38.039318405Z" level=info msg="Container 762bf681dc81df309e129036cf5c52a3fdb7c732d3e649858c58aade1f3ac52b: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:56:38.051934 systemd[1]: Started cri-containerd-589d46280af6f33f5138037b230202a0686de23bb1b42a15b04e567bb731e043.scope - libcontainer container 589d46280af6f33f5138037b230202a0686de23bb1b42a15b04e567bb731e043. Mar 12 02:56:38.055510 containerd[1908]: time="2026-03-12T02:56:38.055447673Z" level=info msg="CreateContainer within sandbox \"853e91cfbc1068973bf38276cbf73ddc84e743a080b94e673a538d5181329fd8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dffdc3e1f9dc7a5db7825098719de84bfa3eb393c1e10b20e512751941930a6b\"" Mar 12 02:56:38.056125 containerd[1908]: time="2026-03-12T02:56:38.055985464Z" level=info msg="StartContainer for \"dffdc3e1f9dc7a5db7825098719de84bfa3eb393c1e10b20e512751941930a6b\"" Mar 12 02:56:38.057186 containerd[1908]: time="2026-03-12T02:56:38.057167418Z" level=info msg="connecting to shim dffdc3e1f9dc7a5db7825098719de84bfa3eb393c1e10b20e512751941930a6b" address="unix:///run/containerd/s/6dea70829b749469cf811822a2cba5840aeafcae2786a873964a2f9643bde1ec" protocol=ttrpc version=3 Mar 12 02:56:38.069847 containerd[1908]: time="2026-03-12T02:56:38.069802068Z" level=info msg="CreateContainer within sandbox \"6eaf65c47f60ebc856f3a8a23950481c9553d74a2d0e668688c4f700a662ad26\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"762bf681dc81df309e129036cf5c52a3fdb7c732d3e649858c58aade1f3ac52b\"" Mar 12 02:56:38.071136 containerd[1908]: time="2026-03-12T02:56:38.070208077Z" level=info msg="StartContainer for \"762bf681dc81df309e129036cf5c52a3fdb7c732d3e649858c58aade1f3ac52b\"" Mar 12 02:56:38.071136 containerd[1908]: time="2026-03-12T02:56:38.070831823Z" level=info msg="connecting to shim 762bf681dc81df309e129036cf5c52a3fdb7c732d3e649858c58aade1f3ac52b" 
address="unix:///run/containerd/s/4215ad1b12ee1aadae6183ae21c880af154ae11f28772d1a709f9575acde2bc8" protocol=ttrpc version=3 Mar 12 02:56:38.073945 systemd[1]: Started cri-containerd-dffdc3e1f9dc7a5db7825098719de84bfa3eb393c1e10b20e512751941930a6b.scope - libcontainer container dffdc3e1f9dc7a5db7825098719de84bfa3eb393c1e10b20e512751941930a6b. Mar 12 02:56:38.098075 systemd[1]: Started cri-containerd-762bf681dc81df309e129036cf5c52a3fdb7c732d3e649858c58aade1f3ac52b.scope - libcontainer container 762bf681dc81df309e129036cf5c52a3fdb7c732d3e649858c58aade1f3ac52b. Mar 12 02:56:38.110161 containerd[1908]: time="2026-03-12T02:56:38.110138424Z" level=info msg="StartContainer for \"589d46280af6f33f5138037b230202a0686de23bb1b42a15b04e567bb731e043\" returns successfully" Mar 12 02:56:38.141547 containerd[1908]: time="2026-03-12T02:56:38.141441081Z" level=info msg="StartContainer for \"dffdc3e1f9dc7a5db7825098719de84bfa3eb393c1e10b20e512751941930a6b\" returns successfully" Mar 12 02:56:38.142826 containerd[1908]: time="2026-03-12T02:56:38.142780601Z" level=info msg="StartContainer for \"762bf681dc81df309e129036cf5c52a3fdb7c732d3e649858c58aade1f3ac52b\" returns successfully" Mar 12 02:56:38.484841 kubelet[3042]: E0312 02:56:38.484665 3042 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:38.485194 kubelet[3042]: E0312 02:56:38.485176 3042 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:38.489115 kubelet[3042]: E0312 02:56:38.489011 3042 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:38.626972 kubelet[3042]: I0312 02:56:38.626956 3042 
kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.099608 kubelet[3042]: E0312 02:56:39.099579 3042 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-4fd21a1aad\" not found" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.192974 kubelet[3042]: I0312 02:56:39.192941 3042 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.193101 kubelet[3042]: E0312 02:56:39.193077 3042 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4459.2.4-n-4fd21a1aad\": node \"ci-4459.2.4-n-4fd21a1aad\" not found" Mar 12 02:56:39.246712 kubelet[3042]: I0312 02:56:39.246675 3042 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.256765 kubelet[3042]: E0312 02:56:39.256729 3042 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.256765 kubelet[3042]: I0312 02:56:39.256753 3042 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.258418 kubelet[3042]: E0312 02:56:39.258397 3042 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.258418 kubelet[3042]: I0312 02:56:39.258417 3042 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.259558 kubelet[3042]: E0312 02:56:39.259539 3042 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4459.2.4-n-4fd21a1aad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.435774 kubelet[3042]: I0312 02:56:39.435076 3042 apiserver.go:52] "Watching apiserver" Mar 12 02:56:39.448449 kubelet[3042]: I0312 02:56:39.448422 3042 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 02:56:39.489696 kubelet[3042]: I0312 02:56:39.489532 3042 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.489696 kubelet[3042]: I0312 02:56:39.489614 3042 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.491114 kubelet[3042]: E0312 02:56:39.491098 3042 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:39.491874 kubelet[3042]: E0312 02:56:39.491855 3042 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-4fd21a1aad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:41.432440 systemd[1]: Reload requested from client PID 3326 ('systemctl') (unit session-9.scope)... Mar 12 02:56:41.432698 systemd[1]: Reloading... Mar 12 02:56:41.503877 zram_generator::config[3376]: No configuration found. 
Mar 12 02:56:41.648220 kubelet[3042]: I0312 02:56:41.648010 3042 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:41.657037 kubelet[3042]: I0312 02:56:41.657012 3042 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:41.665606 systemd[1]: Reloading finished in 232 ms. Mar 12 02:56:41.688497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:41.707557 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 02:56:41.708852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:41.708896 systemd[1]: kubelet.service: Consumed 414ms CPU time, 121.8M memory peak. Mar 12 02:56:41.710770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 02:56:41.803628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 02:56:41.815515 (kubelet)[3437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 02:56:41.841994 kubelet[3437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 02:56:41.847723 kubelet[3437]: I0312 02:56:41.847360 3437 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 12 02:56:41.847723 kubelet[3437]: I0312 02:56:41.847392 3437 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 02:56:41.847723 kubelet[3437]: I0312 02:56:41.847410 3437 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 02:56:41.847723 kubelet[3437]: I0312 02:56:41.847413 3437 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 02:56:41.848067 kubelet[3437]: I0312 02:56:41.848055 3437 server.go:951] "Client rotation is on, will bootstrap in background" Mar 12 02:56:41.849108 kubelet[3437]: I0312 02:56:41.849091 3437 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 02:56:41.850657 kubelet[3437]: I0312 02:56:41.850546 3437 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 02:56:41.853286 kubelet[3437]: I0312 02:56:41.853264 3437 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 02:56:41.855539 kubelet[3437]: I0312 02:56:41.855522 3437 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 02:56:41.855689 kubelet[3437]: I0312 02:56:41.855666 3437 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 02:56:41.855783 kubelet[3437]: I0312 02:56:41.855687 3437 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-4fd21a1aad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 02:56:41.855862 kubelet[3437]: I0312 02:56:41.855783 3437 topology_manager.go:143] "Creating topology manager with none policy" Mar 12 
02:56:41.855862 kubelet[3437]: I0312 02:56:41.855792 3437 container_manager_linux.go:308] "Creating device plugin manager" Mar 12 02:56:41.855862 kubelet[3437]: I0312 02:56:41.855823 3437 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 02:56:41.855959 kubelet[3437]: I0312 02:56:41.855942 3437 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 12 02:56:41.856063 kubelet[3437]: I0312 02:56:41.856049 3437 kubelet.go:482] "Attempting to sync node with API server" Mar 12 02:56:41.856083 kubelet[3437]: I0312 02:56:41.856064 3437 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 02:56:41.856083 kubelet[3437]: I0312 02:56:41.856076 3437 kubelet.go:394] "Adding apiserver pod source" Mar 12 02:56:41.856083 kubelet[3437]: I0312 02:56:41.856083 3437 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 02:56:41.858986 kubelet[3437]: I0312 02:56:41.858727 3437 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 12 02:56:41.859343 kubelet[3437]: I0312 02:56:41.859325 3437 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 02:56:41.859419 kubelet[3437]: I0312 02:56:41.859350 3437 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 02:56:41.860939 kubelet[3437]: I0312 02:56:41.860821 3437 server.go:1257] "Started kubelet" Mar 12 02:56:41.865100 kubelet[3437]: I0312 02:56:41.864504 3437 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 12 02:56:41.872006 kubelet[3437]: I0312 02:56:41.871986 3437 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 
02:56:41.872141 kubelet[3437]: I0312 02:56:41.864878 3437 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 02:56:41.872716 kubelet[3437]: I0312 02:56:41.872700 3437 server.go:317] "Adding debug handlers to kubelet server" Mar 12 02:56:41.873308 kubelet[3437]: I0312 02:56:41.873292 3437 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 12 02:56:41.873494 kubelet[3437]: E0312 02:56:41.873477 3437 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-4fd21a1aad\" not found" Mar 12 02:56:41.873668 kubelet[3437]: I0312 02:56:41.864920 3437 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 02:56:41.873749 kubelet[3437]: I0312 02:56:41.873737 3437 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 02:56:41.874114 kubelet[3437]: I0312 02:56:41.873937 3437 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 02:56:41.876217 kubelet[3437]: I0312 02:56:41.876202 3437 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 02:56:41.876372 kubelet[3437]: I0312 02:56:41.876361 3437 reconciler.go:29] "Reconciler: start to sync state" Mar 12 02:56:41.877863 kubelet[3437]: I0312 02:56:41.877836 3437 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 02:56:41.878717 kubelet[3437]: I0312 02:56:41.878702 3437 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 12 02:56:41.878784 kubelet[3437]: I0312 02:56:41.878775 3437 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 12 02:56:41.878849 kubelet[3437]: I0312 02:56:41.878841 3437 kubelet.go:2501] "Starting kubelet main sync loop" Mar 12 02:56:41.878932 kubelet[3437]: E0312 02:56:41.878919 3437 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 02:56:41.888821 kubelet[3437]: I0312 02:56:41.888793 3437 factory.go:223] Registration of the containerd container factory successfully Mar 12 02:56:41.888915 kubelet[3437]: I0312 02:56:41.888906 3437 factory.go:223] Registration of the systemd container factory successfully Mar 12 02:56:41.889025 kubelet[3437]: I0312 02:56:41.889009 3437 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 02:56:41.923700 kubelet[3437]: I0312 02:56:41.923676 3437 cpu_manager.go:225] "Starting" policy="none" Mar 12 02:56:41.923700 kubelet[3437]: I0312 02:56:41.923691 3437 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 02:56:41.923700 kubelet[3437]: I0312 02:56:41.923707 3437 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 12 02:56:41.923850 kubelet[3437]: I0312 02:56:41.923792 3437 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 12 02:56:41.923850 kubelet[3437]: I0312 02:56:41.923799 3437 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 12 02:56:41.923850 kubelet[3437]: I0312 02:56:41.923828 3437 policy_none.go:50] "Start" Mar 12 02:56:41.923850 kubelet[3437]: I0312 02:56:41.923834 3437 memory_manager.go:187] "Starting memorymanager" 
policy="None" Mar 12 02:56:41.923850 kubelet[3437]: I0312 02:56:41.923841 3437 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 02:56:41.923923 kubelet[3437]: I0312 02:56:41.923904 3437 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 02:56:41.923923 kubelet[3437]: I0312 02:56:41.923911 3437 policy_none.go:44] "Start" Mar 12 02:56:41.928734 kubelet[3437]: E0312 02:56:41.928623 3437 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 02:56:41.928803 kubelet[3437]: I0312 02:56:41.928740 3437 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 12 02:56:41.928803 kubelet[3437]: I0312 02:56:41.928748 3437 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 02:56:41.929050 kubelet[3437]: I0312 02:56:41.929010 3437 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 12 02:56:41.929980 kubelet[3437]: E0312 02:56:41.929939 3437 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 02:56:41.979872 kubelet[3437]: I0312 02:56:41.979791 3437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:41.980496 kubelet[3437]: I0312 02:56:41.980431 3437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:41.980807 kubelet[3437]: I0312 02:56:41.980794 3437 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:41.987993 kubelet[3437]: I0312 02:56:41.987977 3437 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:41.993730 kubelet[3437]: I0312 02:56:41.993603 3437 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:41.993880 kubelet[3437]: I0312 02:56:41.993832 3437 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 02:56:41.994032 kubelet[3437]: E0312 02:56:41.994004 3437 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-4fd21a1aad\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.031047 kubelet[3437]: I0312 02:56:42.030783 3437 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.052778 kubelet[3437]: I0312 02:56:42.052745 3437 kubelet_node_status.go:123] "Node was previously registered" node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.052886 kubelet[3437]: I0312 02:56:42.052834 3437 kubelet_node_status.go:77] "Successfully registered node" 
node="ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078266 kubelet[3437]: I0312 02:56:42.078104 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0129ffccab21975a381ab4724dfbb1e6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" (UID: \"0129ffccab21975a381ab4724dfbb1e6\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078266 kubelet[3437]: I0312 02:56:42.078130 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078266 kubelet[3437]: I0312 02:56:42.078149 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078266 kubelet[3437]: I0312 02:56:42.078164 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078266 kubelet[3437]: I0312 02:56:42.078174 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078422 kubelet[3437]: I0312 02:56:42.078187 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e99a172f8bdb41b67e0da45f56098e0d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-4fd21a1aad\" (UID: \"e99a172f8bdb41b67e0da45f56098e0d\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078422 kubelet[3437]: I0312 02:56:42.078197 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9aede371496d52f5ede7bc29e391eaf9-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-4fd21a1aad\" (UID: \"9aede371496d52f5ede7bc29e391eaf9\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078422 kubelet[3437]: I0312 02:56:42.078206 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0129ffccab21975a381ab4724dfbb1e6-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" (UID: \"0129ffccab21975a381ab4724dfbb1e6\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.078422 kubelet[3437]: I0312 02:56:42.078218 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0129ffccab21975a381ab4724dfbb1e6-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-4fd21a1aad\" (UID: \"0129ffccab21975a381ab4724dfbb1e6\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" Mar 12 02:56:42.522093 sudo[3475]: root : PWD=/home/core ; 
USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 12 02:56:42.522302 sudo[3475]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 12 02:56:42.769226 sudo[3475]: pam_unix(sudo:session): session closed for user root Mar 12 02:56:42.858086 kubelet[3437]: I0312 02:56:42.858035 3437 apiserver.go:52] "Watching apiserver" Mar 12 02:56:42.877008 kubelet[3437]: I0312 02:56:42.876986 3437 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 02:56:42.929634 kubelet[3437]: I0312 02:56:42.929594 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-4fd21a1aad" podStartSLOduration=1.929576789 podStartE2EDuration="1.929576789s" podCreationTimestamp="2026-03-12 02:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:42.928805468 +0000 UTC m=+1.110621696" watchObservedRunningTime="2026-03-12 02:56:42.929576789 +0000 UTC m=+1.111393009" Mar 12 02:56:42.939350 kubelet[3437]: I0312 02:56:42.939320 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-4fd21a1aad" podStartSLOduration=1.939314325 podStartE2EDuration="1.939314325s" podCreationTimestamp="2026-03-12 02:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:42.939213913 +0000 UTC m=+1.121030133" watchObservedRunningTime="2026-03-12 02:56:42.939314325 +0000 UTC m=+1.121130553" Mar 12 02:56:42.967440 kubelet[3437]: I0312 02:56:42.967405 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-4fd21a1aad" podStartSLOduration=1.967399712 podStartE2EDuration="1.967399712s" podCreationTimestamp="2026-03-12 02:56:41 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:42.950454961 +0000 UTC m=+1.132271181" watchObservedRunningTime="2026-03-12 02:56:42.967399712 +0000 UTC m=+1.149215940" Mar 12 02:56:43.841177 sudo[2389]: pam_unix(sudo:session): session closed for user root Mar 12 02:56:43.918836 sshd[2388]: Connection closed by 10.200.16.10 port 60266 Mar 12 02:56:43.920203 sshd-session[2385]: pam_unix(sshd:session): session closed for user core Mar 12 02:56:43.923135 systemd-logind[1890]: Session 9 logged out. Waiting for processes to exit. Mar 12 02:56:43.923633 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:60266.service: Deactivated successfully. Mar 12 02:56:43.927509 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 02:56:43.927864 systemd[1]: session-9.scope: Consumed 2.064s CPU time, 258.3M memory peak. Mar 12 02:56:43.929570 systemd-logind[1890]: Removed session 9. Mar 12 02:56:47.595339 kubelet[3437]: I0312 02:56:47.595309 3437 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 02:56:47.596128 containerd[1908]: time="2026-03-12T02:56:47.596096230Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 02:56:47.596590 kubelet[3437]: I0312 02:56:47.596304 3437 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 02:56:48.606415 systemd[1]: Created slice kubepods-besteffort-pod6a9e3131_202d_4e4d_8cd1_0fdf665dc7b7.slice - libcontainer container kubepods-besteffort-pod6a9e3131_202d_4e4d_8cd1_0fdf665dc7b7.slice. 
Mar 12 02:56:48.613264 kubelet[3437]: I0312 02:56:48.613181 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7-kube-proxy\") pod \"kube-proxy-mzqxw\" (UID: \"6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7\") " pod="kube-system/kube-proxy-mzqxw" Mar 12 02:56:48.613264 kubelet[3437]: I0312 02:56:48.613215 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9cvc\" (UniqueName: \"kubernetes.io/projected/6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7-kube-api-access-s9cvc\") pod \"kube-proxy-mzqxw\" (UID: \"6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7\") " pod="kube-system/kube-proxy-mzqxw" Mar 12 02:56:48.613264 kubelet[3437]: I0312 02:56:48.613256 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7-xtables-lock\") pod \"kube-proxy-mzqxw\" (UID: \"6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7\") " pod="kube-system/kube-proxy-mzqxw" Mar 12 02:56:48.613264 kubelet[3437]: I0312 02:56:48.613269 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7-lib-modules\") pod \"kube-proxy-mzqxw\" (UID: \"6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7\") " pod="kube-system/kube-proxy-mzqxw" Mar 12 02:56:48.627543 systemd[1]: Created slice kubepods-burstable-pod9575640c_f5fc_4eca_9b78_a781b5903216.slice - libcontainer container kubepods-burstable-pod9575640c_f5fc_4eca_9b78_a781b5903216.slice. 
Mar 12 02:56:48.714249 kubelet[3437]: I0312 02:56:48.714158 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-xtables-lock\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714249 kubelet[3437]: I0312 02:56:48.714187 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-kernel\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714249 kubelet[3437]: I0312 02:56:48.714198 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-bpf-maps\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714249 kubelet[3437]: I0312 02:56:48.714207 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9575640c-f5fc-4eca-9b78-a781b5903216-clustermesh-secrets\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714249 kubelet[3437]: I0312 02:56:48.714218 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-config-path\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714603 kubelet[3437]: I0312 02:56:48.714467 3437 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-net\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714603 kubelet[3437]: I0312 02:56:48.714486 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-hubble-tls\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714603 kubelet[3437]: I0312 02:56:48.714549 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cni-path\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714603 kubelet[3437]: I0312 02:56:48.714560 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-etc-cni-netd\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714728 kubelet[3437]: I0312 02:56:48.714712 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-run\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714900 kubelet[3437]: I0312 02:56:48.714860 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-hostproc\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714946 kubelet[3437]: I0312 02:56:48.714902 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-cgroup\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714946 kubelet[3437]: I0312 02:56:48.714917 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-lib-modules\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.714946 kubelet[3437]: I0312 02:56:48.714937 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trnrw\" (UniqueName: \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-kube-api-access-trnrw\") pod \"cilium-f72rm\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " pod="kube-system/cilium-f72rm" Mar 12 02:56:48.789701 systemd[1]: Created slice kubepods-besteffort-pod46c28e00_9663_4822_8099_c81f5d7ff3ae.slice - libcontainer container kubepods-besteffort-pod46c28e00_9663_4822_8099_c81f5d7ff3ae.slice. 
Mar 12 02:56:48.816052 kubelet[3437]: I0312 02:56:48.815343 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46c28e00-9663-4822-8099-c81f5d7ff3ae-cilium-config-path\") pod \"cilium-operator-78cf5644cb-bt7sm\" (UID: \"46c28e00-9663-4822-8099-c81f5d7ff3ae\") " pod="kube-system/cilium-operator-78cf5644cb-bt7sm" Mar 12 02:56:48.816290 kubelet[3437]: I0312 02:56:48.816189 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hpzd\" (UniqueName: \"kubernetes.io/projected/46c28e00-9663-4822-8099-c81f5d7ff3ae-kube-api-access-4hpzd\") pod \"cilium-operator-78cf5644cb-bt7sm\" (UID: \"46c28e00-9663-4822-8099-c81f5d7ff3ae\") " pod="kube-system/cilium-operator-78cf5644cb-bt7sm" Mar 12 02:56:48.928646 containerd[1908]: time="2026-03-12T02:56:48.928572961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzqxw,Uid:6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:48.939514 containerd[1908]: time="2026-03-12T02:56:48.939489427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f72rm,Uid:9575640c-f5fc-4eca-9b78-a781b5903216,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:48.990711 containerd[1908]: time="2026-03-12T02:56:48.990639509Z" level=info msg="connecting to shim 59d508515650bf528d0efb9c981c6740c8cafaa75234adba1cc77452a3bc59ea" address="unix:///run/containerd/s/003322ff3b605dfb09ac5f107eb06748b907e938646d4f429d600f050daafc0e" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:49.007837 containerd[1908]: time="2026-03-12T02:56:49.007660677Z" level=info msg="connecting to shim a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947" address="unix:///run/containerd/s/c31c99df3a0916fd3a11e2a22756c1e7296540ce4ebe3ada6a412a283af5824f" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:49.008188 systemd[1]: Started 
cri-containerd-59d508515650bf528d0efb9c981c6740c8cafaa75234adba1cc77452a3bc59ea.scope - libcontainer container 59d508515650bf528d0efb9c981c6740c8cafaa75234adba1cc77452a3bc59ea. Mar 12 02:56:49.025009 systemd[1]: Started cri-containerd-a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947.scope - libcontainer container a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947. Mar 12 02:56:49.035991 containerd[1908]: time="2026-03-12T02:56:49.035945667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzqxw,Uid:6a9e3131-202d-4e4d-8cd1-0fdf665dc7b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"59d508515650bf528d0efb9c981c6740c8cafaa75234adba1cc77452a3bc59ea\"" Mar 12 02:56:49.049798 containerd[1908]: time="2026-03-12T02:56:49.049764099Z" level=info msg="CreateContainer within sandbox \"59d508515650bf528d0efb9c981c6740c8cafaa75234adba1cc77452a3bc59ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 02:56:49.064960 containerd[1908]: time="2026-03-12T02:56:49.064903492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f72rm,Uid:9575640c-f5fc-4eca-9b78-a781b5903216,Namespace:kube-system,Attempt:0,} returns sandbox id \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\"" Mar 12 02:56:49.066217 containerd[1908]: time="2026-03-12T02:56:49.066167491Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 12 02:56:49.073880 containerd[1908]: time="2026-03-12T02:56:49.073855284Z" level=info msg="Container 14d1f1a0cf5a25d430e8a0914eccc36d6f61d2c058567fce9b5cb9fda0fa0104: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:56:49.094376 containerd[1908]: time="2026-03-12T02:56:49.094345702Z" level=info msg="CreateContainer within sandbox \"59d508515650bf528d0efb9c981c6740c8cafaa75234adba1cc77452a3bc59ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"14d1f1a0cf5a25d430e8a0914eccc36d6f61d2c058567fce9b5cb9fda0fa0104\"" Mar 12 02:56:49.094876 containerd[1908]: time="2026-03-12T02:56:49.094792215Z" level=info msg="StartContainer for \"14d1f1a0cf5a25d430e8a0914eccc36d6f61d2c058567fce9b5cb9fda0fa0104\"" Mar 12 02:56:49.096095 containerd[1908]: time="2026-03-12T02:56:49.096076223Z" level=info msg="connecting to shim 14d1f1a0cf5a25d430e8a0914eccc36d6f61d2c058567fce9b5cb9fda0fa0104" address="unix:///run/containerd/s/003322ff3b605dfb09ac5f107eb06748b907e938646d4f429d600f050daafc0e" protocol=ttrpc version=3 Mar 12 02:56:49.097196 containerd[1908]: time="2026-03-12T02:56:49.096887605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-bt7sm,Uid:46c28e00-9663-4822-8099-c81f5d7ff3ae,Namespace:kube-system,Attempt:0,}" Mar 12 02:56:49.108931 systemd[1]: Started cri-containerd-14d1f1a0cf5a25d430e8a0914eccc36d6f61d2c058567fce9b5cb9fda0fa0104.scope - libcontainer container 14d1f1a0cf5a25d430e8a0914eccc36d6f61d2c058567fce9b5cb9fda0fa0104. Mar 12 02:56:49.143472 containerd[1908]: time="2026-03-12T02:56:49.143336727Z" level=info msg="connecting to shim ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308" address="unix:///run/containerd/s/3a84c8df06e46d770c3783aa12dec423c13cfe11696ddb5e0ce1f2c716afa630" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:56:49.159944 systemd[1]: Started cri-containerd-ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308.scope - libcontainer container ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308. 
Mar 12 02:56:49.170118 containerd[1908]: time="2026-03-12T02:56:49.170089540Z" level=info msg="StartContainer for \"14d1f1a0cf5a25d430e8a0914eccc36d6f61d2c058567fce9b5cb9fda0fa0104\" returns successfully" Mar 12 02:56:49.203771 containerd[1908]: time="2026-03-12T02:56:49.203585759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-bt7sm,Uid:46c28e00-9663-4822-8099-c81f5d7ff3ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\"" Mar 12 02:56:50.554738 kubelet[3437]: I0312 02:56:50.554565 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-mzqxw" podStartSLOduration=2.554555561 podStartE2EDuration="2.554555561s" podCreationTimestamp="2026-03-12 02:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:56:49.938308933 +0000 UTC m=+8.120125153" watchObservedRunningTime="2026-03-12 02:56:50.554555561 +0000 UTC m=+8.736371781" Mar 12 02:57:00.922972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354484976.mount: Deactivated successfully. 
Mar 12 02:57:02.276430 containerd[1908]: time="2026-03-12T02:57:02.276388077Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:57:02.280906 containerd[1908]: time="2026-03-12T02:57:02.280870426Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 12 02:57:02.284126 containerd[1908]: time="2026-03-12T02:57:02.284091413Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:57:02.285552 containerd[1908]: time="2026-03-12T02:57:02.285477811Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.218636367s" Mar 12 02:57:02.285552 containerd[1908]: time="2026-03-12T02:57:02.285504164Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 12 02:57:02.287208 containerd[1908]: time="2026-03-12T02:57:02.287046959Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 12 02:57:02.295467 containerd[1908]: time="2026-03-12T02:57:02.295440370Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 12 02:57:02.355921 containerd[1908]: time="2026-03-12T02:57:02.355868975Z" level=info msg="Container 0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:03.191608 containerd[1908]: time="2026-03-12T02:57:03.191512982Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\"" Mar 12 02:57:03.191872 containerd[1908]: time="2026-03-12T02:57:03.191854611Z" level=info msg="StartContainer for \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\"" Mar 12 02:57:03.193562 containerd[1908]: time="2026-03-12T02:57:03.193492282Z" level=info msg="connecting to shim 0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607" address="unix:///run/containerd/s/c31c99df3a0916fd3a11e2a22756c1e7296540ce4ebe3ada6a412a283af5824f" protocol=ttrpc version=3 Mar 12 02:57:03.211923 systemd[1]: Started cri-containerd-0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607.scope - libcontainer container 0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607. Mar 12 02:57:03.239276 containerd[1908]: time="2026-03-12T02:57:03.239248027Z" level=info msg="StartContainer for \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\" returns successfully" Mar 12 02:57:03.241211 systemd[1]: cri-containerd-0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607.scope: Deactivated successfully. 
Mar 12 02:57:03.244905 containerd[1908]: time="2026-03-12T02:57:03.244852859Z" level=info msg="received container exit event container_id:\"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\" id:\"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\" pid:3859 exited_at:{seconds:1773284223 nanos:244541615}" Mar 12 02:57:03.260026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607-rootfs.mount: Deactivated successfully. Mar 12 02:57:04.934720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777866277.mount: Deactivated successfully. Mar 12 02:57:04.967593 containerd[1908]: time="2026-03-12T02:57:04.967557395Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 12 02:57:05.006551 containerd[1908]: time="2026-03-12T02:57:05.005675958Z" level=info msg="Container 942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:05.030172 containerd[1908]: time="2026-03-12T02:57:05.030139755Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\"" Mar 12 02:57:05.031002 containerd[1908]: time="2026-03-12T02:57:05.030981716Z" level=info msg="StartContainer for \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\"" Mar 12 02:57:05.031987 containerd[1908]: time="2026-03-12T02:57:05.031940417Z" level=info msg="connecting to shim 942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70" address="unix:///run/containerd/s/c31c99df3a0916fd3a11e2a22756c1e7296540ce4ebe3ada6a412a283af5824f" protocol=ttrpc version=3 Mar 12 
02:57:05.051931 systemd[1]: Started cri-containerd-942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70.scope - libcontainer container 942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70. Mar 12 02:57:05.083475 containerd[1908]: time="2026-03-12T02:57:05.083404941Z" level=info msg="StartContainer for \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\" returns successfully" Mar 12 02:57:05.094087 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 02:57:05.094263 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 02:57:05.094736 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 12 02:57:05.097395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 02:57:05.100054 systemd[1]: cri-containerd-942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70.scope: Deactivated successfully. Mar 12 02:57:05.100754 containerd[1908]: time="2026-03-12T02:57:05.100729351Z" level=info msg="received container exit event container_id:\"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\" id:\"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\" pid:3905 exited_at:{seconds:1773284225 nanos:100513895}" Mar 12 02:57:05.111833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 12 02:57:05.873306 containerd[1908]: time="2026-03-12T02:57:05.872856243Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:57:05.877061 containerd[1908]: time="2026-03-12T02:57:05.877039227Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 12 02:57:05.880428 containerd[1908]: time="2026-03-12T02:57:05.880404885Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 02:57:05.881523 containerd[1908]: time="2026-03-12T02:57:05.881501775Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.594430951s" Mar 12 02:57:05.881760 containerd[1908]: time="2026-03-12T02:57:05.881731656Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 12 02:57:05.889839 containerd[1908]: time="2026-03-12T02:57:05.889778510Z" level=info msg="CreateContainer within sandbox \"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 12 02:57:05.908861 containerd[1908]: time="2026-03-12T02:57:05.908803554Z" level=info msg="Container 
4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:05.928334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70-rootfs.mount: Deactivated successfully. Mar 12 02:57:05.930481 containerd[1908]: time="2026-03-12T02:57:05.930253539Z" level=info msg="CreateContainer within sandbox \"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\"" Mar 12 02:57:05.931182 containerd[1908]: time="2026-03-12T02:57:05.931162182Z" level=info msg="StartContainer for \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\"" Mar 12 02:57:05.931942 containerd[1908]: time="2026-03-12T02:57:05.931883161Z" level=info msg="connecting to shim 4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8" address="unix:///run/containerd/s/3a84c8df06e46d770c3783aa12dec423c13cfe11696ddb5e0ce1f2c716afa630" protocol=ttrpc version=3 Mar 12 02:57:05.948953 systemd[1]: Started cri-containerd-4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8.scope - libcontainer container 4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8. 
Mar 12 02:57:05.974231 containerd[1908]: time="2026-03-12T02:57:05.974200886Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 12 02:57:05.989053 containerd[1908]: time="2026-03-12T02:57:05.989018672Z" level=info msg="StartContainer for \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" returns successfully" Mar 12 02:57:06.006008 containerd[1908]: time="2026-03-12T02:57:06.005454288Z" level=info msg="Container cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:06.039011 containerd[1908]: time="2026-03-12T02:57:06.038962505Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\"" Mar 12 02:57:06.039511 containerd[1908]: time="2026-03-12T02:57:06.039481693Z" level=info msg="StartContainer for \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\"" Mar 12 02:57:06.041722 containerd[1908]: time="2026-03-12T02:57:06.041699019Z" level=info msg="connecting to shim cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996" address="unix:///run/containerd/s/c31c99df3a0916fd3a11e2a22756c1e7296540ce4ebe3ada6a412a283af5824f" protocol=ttrpc version=3 Mar 12 02:57:06.069015 systemd[1]: Started cri-containerd-cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996.scope - libcontainer container cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996. 
Mar 12 02:57:06.148969 containerd[1908]: time="2026-03-12T02:57:06.148798843Z" level=info msg="StartContainer for \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\" returns successfully" Mar 12 02:57:06.156771 systemd[1]: cri-containerd-cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996.scope: Deactivated successfully. Mar 12 02:57:06.160847 containerd[1908]: time="2026-03-12T02:57:06.160127423Z" level=info msg="received container exit event container_id:\"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\" id:\"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\" pid:3997 exited_at:{seconds:1773284226 nanos:159282919}" Mar 12 02:57:06.928084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889134443.mount: Deactivated successfully. Mar 12 02:57:06.983685 containerd[1908]: time="2026-03-12T02:57:06.983506567Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 12 02:57:06.984064 kubelet[3437]: I0312 02:57:06.983699 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-bt7sm" podStartSLOduration=2.30723506 podStartE2EDuration="18.983687438s" podCreationTimestamp="2026-03-12 02:56:48 +0000 UTC" firstStartedPulling="2026-03-12 02:56:49.205706142 +0000 UTC m=+7.387522362" lastFinishedPulling="2026-03-12 02:57:05.882158512 +0000 UTC m=+24.063974740" observedRunningTime="2026-03-12 02:57:06.982390956 +0000 UTC m=+25.164207176" watchObservedRunningTime="2026-03-12 02:57:06.983687438 +0000 UTC m=+25.165503666" Mar 12 02:57:07.008543 containerd[1908]: time="2026-03-12T02:57:07.008131282Z" level=info msg="Container c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:07.009547 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount290078528.mount: Deactivated successfully. Mar 12 02:57:07.027704 containerd[1908]: time="2026-03-12T02:57:07.027670306Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\"" Mar 12 02:57:07.028208 containerd[1908]: time="2026-03-12T02:57:07.028187374Z" level=info msg="StartContainer for \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\"" Mar 12 02:57:07.028801 containerd[1908]: time="2026-03-12T02:57:07.028778445Z" level=info msg="connecting to shim c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d" address="unix:///run/containerd/s/c31c99df3a0916fd3a11e2a22756c1e7296540ce4ebe3ada6a412a283af5824f" protocol=ttrpc version=3 Mar 12 02:57:07.055943 systemd[1]: Started cri-containerd-c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d.scope - libcontainer container c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d. Mar 12 02:57:07.075130 systemd[1]: cri-containerd-c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d.scope: Deactivated successfully. 
Mar 12 02:57:07.085587 containerd[1908]: time="2026-03-12T02:57:07.085522924Z" level=info msg="received container exit event container_id:\"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\" id:\"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\" pid:4037 exited_at:{seconds:1773284227 nanos:76198821}" Mar 12 02:57:07.091164 containerd[1908]: time="2026-03-12T02:57:07.091134348Z" level=info msg="StartContainer for \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\" returns successfully" Mar 12 02:57:07.102220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d-rootfs.mount: Deactivated successfully. Mar 12 02:57:07.985844 containerd[1908]: time="2026-03-12T02:57:07.985790729Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 12 02:57:08.017868 containerd[1908]: time="2026-03-12T02:57:08.017745574Z" level=info msg="Container 869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:08.019162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274515218.mount: Deactivated successfully. 
Mar 12 02:57:08.032100 containerd[1908]: time="2026-03-12T02:57:08.032066181Z" level=info msg="CreateContainer within sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\"" Mar 12 02:57:08.032486 containerd[1908]: time="2026-03-12T02:57:08.032458516Z" level=info msg="StartContainer for \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\"" Mar 12 02:57:08.033690 containerd[1908]: time="2026-03-12T02:57:08.033656114Z" level=info msg="connecting to shim 869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab" address="unix:///run/containerd/s/c31c99df3a0916fd3a11e2a22756c1e7296540ce4ebe3ada6a412a283af5824f" protocol=ttrpc version=3 Mar 12 02:57:08.055936 systemd[1]: Started cri-containerd-869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab.scope - libcontainer container 869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab. Mar 12 02:57:08.093370 containerd[1908]: time="2026-03-12T02:57:08.093342147Z" level=info msg="StartContainer for \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" returns successfully" Mar 12 02:57:08.228760 kubelet[3437]: I0312 02:57:08.228735 3437 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 12 02:57:08.271677 systemd[1]: Created slice kubepods-burstable-pod200e1e55_50a5_4079_a037_f769f5a6940f.slice - libcontainer container kubepods-burstable-pod200e1e55_50a5_4079_a037_f769f5a6940f.slice. Mar 12 02:57:08.278752 systemd[1]: Created slice kubepods-burstable-pod93f29ba7_7785_42bd_802e_1c487b6cc361.slice - libcontainer container kubepods-burstable-pod93f29ba7_7785_42bd_802e_1c487b6cc361.slice. 
Mar 12 02:57:08.328743 kubelet[3437]: I0312 02:57:08.328701 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44c47\" (UniqueName: \"kubernetes.io/projected/200e1e55-50a5-4079-a037-f769f5a6940f-kube-api-access-44c47\") pod \"coredns-7d764666f9-mn6qx\" (UID: \"200e1e55-50a5-4079-a037-f769f5a6940f\") " pod="kube-system/coredns-7d764666f9-mn6qx" Mar 12 02:57:08.328743 kubelet[3437]: I0312 02:57:08.328741 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wr9h\" (UniqueName: \"kubernetes.io/projected/93f29ba7-7785-42bd-802e-1c487b6cc361-kube-api-access-9wr9h\") pod \"coredns-7d764666f9-6dnzl\" (UID: \"93f29ba7-7785-42bd-802e-1c487b6cc361\") " pod="kube-system/coredns-7d764666f9-6dnzl" Mar 12 02:57:08.328989 kubelet[3437]: I0312 02:57:08.328761 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/200e1e55-50a5-4079-a037-f769f5a6940f-config-volume\") pod \"coredns-7d764666f9-mn6qx\" (UID: \"200e1e55-50a5-4079-a037-f769f5a6940f\") " pod="kube-system/coredns-7d764666f9-mn6qx" Mar 12 02:57:08.328989 kubelet[3437]: I0312 02:57:08.328773 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93f29ba7-7785-42bd-802e-1c487b6cc361-config-volume\") pod \"coredns-7d764666f9-6dnzl\" (UID: \"93f29ba7-7785-42bd-802e-1c487b6cc361\") " pod="kube-system/coredns-7d764666f9-6dnzl" Mar 12 02:57:08.583150 containerd[1908]: time="2026-03-12T02:57:08.582921193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mn6qx,Uid:200e1e55-50a5-4079-a037-f769f5a6940f,Namespace:kube-system,Attempt:0,}" Mar 12 02:57:08.592899 containerd[1908]: time="2026-03-12T02:57:08.592676941Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-6dnzl,Uid:93f29ba7-7785-42bd-802e-1c487b6cc361,Namespace:kube-system,Attempt:0,}" Mar 12 02:57:09.001014 kubelet[3437]: I0312 02:57:09.000606 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-f72rm" podStartSLOduration=2.088723745 podStartE2EDuration="21.000593276s" podCreationTimestamp="2026-03-12 02:56:48 +0000 UTC" firstStartedPulling="2026-03-12 02:56:49.065949579 +0000 UTC m=+7.247765799" lastFinishedPulling="2026-03-12 02:57:07.97781911 +0000 UTC m=+26.159635330" observedRunningTime="2026-03-12 02:57:08.99906449 +0000 UTC m=+27.180880718" watchObservedRunningTime="2026-03-12 02:57:09.000593276 +0000 UTC m=+27.182409496" Mar 12 02:57:10.109415 systemd-networkd[1498]: cilium_host: Link UP Mar 12 02:57:10.112027 systemd-networkd[1498]: cilium_net: Link UP Mar 12 02:57:10.112181 systemd-networkd[1498]: cilium_net: Gained carrier Mar 12 02:57:10.112265 systemd-networkd[1498]: cilium_host: Gained carrier Mar 12 02:57:10.238622 systemd-networkd[1498]: cilium_vxlan: Link UP Mar 12 02:57:10.238630 systemd-networkd[1498]: cilium_vxlan: Gained carrier Mar 12 02:57:10.357921 systemd-networkd[1498]: cilium_net: Gained IPv6LL Mar 12 02:57:10.470845 kernel: NET: Registered PF_ALG protocol family Mar 12 02:57:10.870033 systemd-networkd[1498]: cilium_host: Gained IPv6LL Mar 12 02:57:11.007642 systemd-networkd[1498]: lxc_health: Link UP Mar 12 02:57:11.018638 systemd-networkd[1498]: lxc_health: Gained carrier Mar 12 02:57:11.118691 systemd-networkd[1498]: lxc44b243ea966f: Link UP Mar 12 02:57:11.130920 kernel: eth0: renamed from tmpca8ed Mar 12 02:57:11.131590 systemd-networkd[1498]: lxc44b243ea966f: Gained carrier Mar 12 02:57:11.149137 systemd-networkd[1498]: lxc1e73f71d6dd0: Link UP Mar 12 02:57:11.149827 kernel: eth0: renamed from tmp86870 Mar 12 02:57:11.151616 systemd-networkd[1498]: lxc1e73f71d6dd0: Gained carrier Mar 12 02:57:11.317994 systemd-networkd[1498]: cilium_vxlan: Gained 
IPv6LL Mar 12 02:57:12.406033 systemd-networkd[1498]: lxc_health: Gained IPv6LL Mar 12 02:57:12.661977 systemd-networkd[1498]: lxc44b243ea966f: Gained IPv6LL Mar 12 02:57:12.791900 systemd-networkd[1498]: lxc1e73f71d6dd0: Gained IPv6LL Mar 12 02:57:13.677386 containerd[1908]: time="2026-03-12T02:57:13.677307774Z" level=info msg="connecting to shim ca8ed1426eb96c5cb4c319d13037e31d8fe1bc74768de4c8121cd7995bd97e21" address="unix:///run/containerd/s/ab44f6281626f063e97f763b5ffaada04b1223ec00bc08254e935a57fb89248d" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:57:13.699601 containerd[1908]: time="2026-03-12T02:57:13.699567519Z" level=info msg="connecting to shim 868700b21296fec200a5af4313efa00aed136b6a3e58cb7dbf07b43ebd6cd806" address="unix:///run/containerd/s/d939d25e5a89946840d6993a363eb74ba3757f7f6234f95b94785da32a4b9270" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:57:13.701976 systemd[1]: Started cri-containerd-ca8ed1426eb96c5cb4c319d13037e31d8fe1bc74768de4c8121cd7995bd97e21.scope - libcontainer container ca8ed1426eb96c5cb4c319d13037e31d8fe1bc74768de4c8121cd7995bd97e21. Mar 12 02:57:13.725109 systemd[1]: Started cri-containerd-868700b21296fec200a5af4313efa00aed136b6a3e58cb7dbf07b43ebd6cd806.scope - libcontainer container 868700b21296fec200a5af4313efa00aed136b6a3e58cb7dbf07b43ebd6cd806. 
Mar 12 02:57:13.739141 containerd[1908]: time="2026-03-12T02:57:13.739075899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mn6qx,Uid:200e1e55-50a5-4079-a037-f769f5a6940f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca8ed1426eb96c5cb4c319d13037e31d8fe1bc74768de4c8121cd7995bd97e21\"" Mar 12 02:57:13.751836 containerd[1908]: time="2026-03-12T02:57:13.751038988Z" level=info msg="CreateContainer within sandbox \"ca8ed1426eb96c5cb4c319d13037e31d8fe1bc74768de4c8121cd7995bd97e21\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 02:57:13.767737 containerd[1908]: time="2026-03-12T02:57:13.767704088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6dnzl,Uid:93f29ba7-7785-42bd-802e-1c487b6cc361,Namespace:kube-system,Attempt:0,} returns sandbox id \"868700b21296fec200a5af4313efa00aed136b6a3e58cb7dbf07b43ebd6cd806\"" Mar 12 02:57:13.778055 containerd[1908]: time="2026-03-12T02:57:13.777970855Z" level=info msg="Container 7276d9999c37fc0ed7de09c88813ea1165ee15aaa3f8324457b9ffc58d5c0802: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:13.778163 containerd[1908]: time="2026-03-12T02:57:13.778142878Z" level=info msg="CreateContainer within sandbox \"868700b21296fec200a5af4313efa00aed136b6a3e58cb7dbf07b43ebd6cd806\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 02:57:13.798843 containerd[1908]: time="2026-03-12T02:57:13.798789994Z" level=info msg="CreateContainer within sandbox \"ca8ed1426eb96c5cb4c319d13037e31d8fe1bc74768de4c8121cd7995bd97e21\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7276d9999c37fc0ed7de09c88813ea1165ee15aaa3f8324457b9ffc58d5c0802\"" Mar 12 02:57:13.799378 containerd[1908]: time="2026-03-12T02:57:13.799334751Z" level=info msg="StartContainer for \"7276d9999c37fc0ed7de09c88813ea1165ee15aaa3f8324457b9ffc58d5c0802\"" Mar 12 02:57:13.800276 containerd[1908]: time="2026-03-12T02:57:13.800248314Z" level=info 
msg="connecting to shim 7276d9999c37fc0ed7de09c88813ea1165ee15aaa3f8324457b9ffc58d5c0802" address="unix:///run/containerd/s/ab44f6281626f063e97f763b5ffaada04b1223ec00bc08254e935a57fb89248d" protocol=ttrpc version=3 Mar 12 02:57:13.809876 containerd[1908]: time="2026-03-12T02:57:13.809844720Z" level=info msg="Container 0d3d2ce69da47a7159627df2dd64610c53709011b82392b2dadad9a149b16846: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:57:13.816929 systemd[1]: Started cri-containerd-7276d9999c37fc0ed7de09c88813ea1165ee15aaa3f8324457b9ffc58d5c0802.scope - libcontainer container 7276d9999c37fc0ed7de09c88813ea1165ee15aaa3f8324457b9ffc58d5c0802. Mar 12 02:57:13.824803 containerd[1908]: time="2026-03-12T02:57:13.824767097Z" level=info msg="CreateContainer within sandbox \"868700b21296fec200a5af4313efa00aed136b6a3e58cb7dbf07b43ebd6cd806\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d3d2ce69da47a7159627df2dd64610c53709011b82392b2dadad9a149b16846\"" Mar 12 02:57:13.825789 containerd[1908]: time="2026-03-12T02:57:13.825764327Z" level=info msg="StartContainer for \"0d3d2ce69da47a7159627df2dd64610c53709011b82392b2dadad9a149b16846\"" Mar 12 02:57:13.826847 containerd[1908]: time="2026-03-12T02:57:13.826356478Z" level=info msg="connecting to shim 0d3d2ce69da47a7159627df2dd64610c53709011b82392b2dadad9a149b16846" address="unix:///run/containerd/s/d939d25e5a89946840d6993a363eb74ba3757f7f6234f95b94785da32a4b9270" protocol=ttrpc version=3 Mar 12 02:57:13.845037 systemd[1]: Started cri-containerd-0d3d2ce69da47a7159627df2dd64610c53709011b82392b2dadad9a149b16846.scope - libcontainer container 0d3d2ce69da47a7159627df2dd64610c53709011b82392b2dadad9a149b16846. 
Mar 12 02:57:13.856418 containerd[1908]: time="2026-03-12T02:57:13.856275964Z" level=info msg="StartContainer for \"7276d9999c37fc0ed7de09c88813ea1165ee15aaa3f8324457b9ffc58d5c0802\" returns successfully" Mar 12 02:57:13.877489 containerd[1908]: time="2026-03-12T02:57:13.877267365Z" level=info msg="StartContainer for \"0d3d2ce69da47a7159627df2dd64610c53709011b82392b2dadad9a149b16846\" returns successfully" Mar 12 02:57:14.012177 kubelet[3437]: I0312 02:57:14.011946 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6dnzl" podStartSLOduration=26.01193428 podStartE2EDuration="26.01193428s" podCreationTimestamp="2026-03-12 02:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:57:14.011206524 +0000 UTC m=+32.193022752" watchObservedRunningTime="2026-03-12 02:57:14.01193428 +0000 UTC m=+32.193750500" Mar 12 02:57:14.043142 kubelet[3437]: I0312 02:57:14.042993 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mn6qx" podStartSLOduration=26.042983193 podStartE2EDuration="26.042983193s" podCreationTimestamp="2026-03-12 02:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:57:14.027920386 +0000 UTC m=+32.209736622" watchObservedRunningTime="2026-03-12 02:57:14.042983193 +0000 UTC m=+32.224799413" Mar 12 02:58:11.277115 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:41134.service - OpenSSH per-connection server daemon (10.200.16.10:41134). 
Mar 12 02:58:11.692833 sshd[4767]: Accepted publickey for core from 10.200.16.10 port 41134 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:11.693564 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:11.696851 systemd-logind[1890]: New session 10 of user core. Mar 12 02:58:11.708931 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 02:58:11.984982 sshd[4770]: Connection closed by 10.200.16.10 port 41134 Mar 12 02:58:11.986160 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:11.988906 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:41134.service: Deactivated successfully. Mar 12 02:58:11.990551 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 02:58:11.991281 systemd-logind[1890]: Session 10 logged out. Waiting for processes to exit. Mar 12 02:58:11.992655 systemd-logind[1890]: Removed session 10. Mar 12 02:58:17.077957 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:41148.service - OpenSSH per-connection server daemon (10.200.16.10:41148). Mar 12 02:58:17.494590 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 41148 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:17.495409 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:17.499161 systemd-logind[1890]: New session 11 of user core. Mar 12 02:58:17.506931 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 12 02:58:17.765895 sshd[4787]: Connection closed by 10.200.16.10 port 41148 Mar 12 02:58:17.766301 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:17.769955 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:41148.service: Deactivated successfully. Mar 12 02:58:17.772142 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 02:58:17.772960 systemd-logind[1890]: Session 11 logged out. 
Waiting for processes to exit. Mar 12 02:58:17.774346 systemd-logind[1890]: Removed session 11. Mar 12 02:58:22.860857 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:49096.service - OpenSSH per-connection server daemon (10.200.16.10:49096). Mar 12 02:58:23.280012 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 49096 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:23.280707 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:23.284138 systemd-logind[1890]: New session 12 of user core. Mar 12 02:58:23.291917 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 02:58:23.551111 sshd[4804]: Connection closed by 10.200.16.10 port 49096 Mar 12 02:58:23.551505 sshd-session[4801]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:23.554893 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:49096.service: Deactivated successfully. Mar 12 02:58:23.556416 systemd[1]: session-12.scope: Deactivated successfully. Mar 12 02:58:23.557093 systemd-logind[1890]: Session 12 logged out. Waiting for processes to exit. Mar 12 02:58:23.558885 systemd-logind[1890]: Removed session 12. Mar 12 02:58:28.642901 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:49100.service - OpenSSH per-connection server daemon (10.200.16.10:49100). Mar 12 02:58:29.064920 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 49100 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:29.065615 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:29.069243 systemd-logind[1890]: New session 13 of user core. Mar 12 02:58:29.076929 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 12 02:58:29.338593 sshd[4819]: Connection closed by 10.200.16.10 port 49100 Mar 12 02:58:29.339100 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:29.342461 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:49100.service: Deactivated successfully. Mar 12 02:58:29.344716 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 02:58:29.345667 systemd-logind[1890]: Session 13 logged out. Waiting for processes to exit. Mar 12 02:58:29.347053 systemd-logind[1890]: Removed session 13. Mar 12 02:58:29.429922 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:49112.service - OpenSSH per-connection server daemon (10.200.16.10:49112). Mar 12 02:58:29.847854 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 49112 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:29.848908 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:29.852654 systemd-logind[1890]: New session 14 of user core. Mar 12 02:58:29.862963 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 02:58:30.146019 sshd[4835]: Connection closed by 10.200.16.10 port 49112 Mar 12 02:58:30.146484 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:30.149477 systemd-logind[1890]: Session 14 logged out. Waiting for processes to exit. Mar 12 02:58:30.149605 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:49112.service: Deactivated successfully. Mar 12 02:58:30.151482 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 02:58:30.153504 systemd-logind[1890]: Removed session 14. Mar 12 02:58:30.233492 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:43732.service - OpenSSH per-connection server daemon (10.200.16.10:43732). 
Mar 12 02:58:30.660105 sshd[4845]: Accepted publickey for core from 10.200.16.10 port 43732 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:30.661195 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:30.664852 systemd-logind[1890]: New session 15 of user core. Mar 12 02:58:30.670921 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 12 02:58:30.936103 sshd[4848]: Connection closed by 10.200.16.10 port 43732 Mar 12 02:58:30.936689 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:30.940309 systemd-logind[1890]: Session 15 logged out. Waiting for processes to exit. Mar 12 02:58:30.940971 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:43732.service: Deactivated successfully. Mar 12 02:58:30.943086 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 02:58:30.944391 systemd-logind[1890]: Removed session 15. Mar 12 02:58:36.030029 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:43736.service - OpenSSH per-connection server daemon (10.200.16.10:43736). Mar 12 02:58:36.450840 sshd[4859]: Accepted publickey for core from 10.200.16.10 port 43736 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:36.451562 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:36.455646 systemd-logind[1890]: New session 16 of user core. Mar 12 02:58:36.457941 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 02:58:36.722069 sshd[4862]: Connection closed by 10.200.16.10 port 43736 Mar 12 02:58:36.722676 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:36.727256 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:43736.service: Deactivated successfully. Mar 12 02:58:36.729769 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 02:58:36.733171 systemd-logind[1890]: Session 16 logged out. 
Waiting for processes to exit. Mar 12 02:58:36.734976 systemd-logind[1890]: Removed session 16. Mar 12 02:58:36.810141 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:43744.service - OpenSSH per-connection server daemon (10.200.16.10:43744). Mar 12 02:58:37.224028 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 43744 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:37.225089 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:37.228488 systemd-logind[1890]: New session 17 of user core. Mar 12 02:58:37.236095 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 02:58:37.525184 sshd[4877]: Connection closed by 10.200.16.10 port 43744 Mar 12 02:58:37.525841 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:37.529280 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:43744.service: Deactivated successfully. Mar 12 02:58:37.531235 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 02:58:37.531882 systemd-logind[1890]: Session 17 logged out. Waiting for processes to exit. Mar 12 02:58:37.533164 systemd-logind[1890]: Removed session 17. Mar 12 02:58:37.614274 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:43752.service - OpenSSH per-connection server daemon (10.200.16.10:43752). Mar 12 02:58:38.027653 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 43752 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:38.028679 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:38.031921 systemd-logind[1890]: New session 18 of user core. Mar 12 02:58:38.039912 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 12 02:58:38.682578 sshd[4890]: Connection closed by 10.200.16.10 port 43752 Mar 12 02:58:38.683105 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:38.686226 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:43752.service: Deactivated successfully. Mar 12 02:58:38.688139 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 02:58:38.689880 systemd-logind[1890]: Session 18 logged out. Waiting for processes to exit. Mar 12 02:58:38.691312 systemd-logind[1890]: Removed session 18. Mar 12 02:58:38.770945 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:43756.service - OpenSSH per-connection server daemon (10.200.16.10:43756). Mar 12 02:58:39.187345 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 43756 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:39.188376 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:39.191670 systemd-logind[1890]: New session 19 of user core. Mar 12 02:58:39.196925 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 02:58:39.536940 sshd[4908]: Connection closed by 10.200.16.10 port 43756 Mar 12 02:58:39.537420 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:39.540680 systemd-logind[1890]: Session 19 logged out. Waiting for processes to exit. Mar 12 02:58:39.541340 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:43756.service: Deactivated successfully. Mar 12 02:58:39.543407 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 02:58:39.545103 systemd-logind[1890]: Removed session 19. Mar 12 02:58:39.621049 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:43770.service - OpenSSH per-connection server daemon (10.200.16.10:43770). 
Mar 12 02:58:40.040742 sshd[4920]: Accepted publickey for core from 10.200.16.10 port 43770 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:40.041445 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:40.044718 systemd-logind[1890]: New session 20 of user core. Mar 12 02:58:40.050920 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 02:58:40.311786 sshd[4923]: Connection closed by 10.200.16.10 port 43770 Mar 12 02:58:40.313053 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:40.316713 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:43770.service: Deactivated successfully. Mar 12 02:58:40.319043 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 02:58:40.319740 systemd-logind[1890]: Session 20 logged out. Waiting for processes to exit. Mar 12 02:58:40.320854 systemd-logind[1890]: Removed session 20. Mar 12 02:58:45.404443 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:48322.service - OpenSSH per-connection server daemon (10.200.16.10:48322). Mar 12 02:58:45.829379 sshd[4939]: Accepted publickey for core from 10.200.16.10 port 48322 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:45.830367 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:45.833745 systemd-logind[1890]: New session 21 of user core. Mar 12 02:58:45.845990 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 02:58:46.099185 sshd[4942]: Connection closed by 10.200.16.10 port 48322 Mar 12 02:58:46.099752 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:46.103200 systemd-logind[1890]: Session 21 logged out. Waiting for processes to exit. Mar 12 02:58:46.103867 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:48322.service: Deactivated successfully. 
Mar 12 02:58:46.106041 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 02:58:46.107598 systemd-logind[1890]: Removed session 21. Mar 12 02:58:51.188022 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:45522.service - OpenSSH per-connection server daemon (10.200.16.10:45522). Mar 12 02:58:51.617185 sshd[4956]: Accepted publickey for core from 10.200.16.10 port 45522 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:51.618253 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:51.622254 systemd-logind[1890]: New session 22 of user core. Mar 12 02:58:51.624924 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 12 02:58:51.891524 sshd[4959]: Connection closed by 10.200.16.10 port 45522 Mar 12 02:58:51.892165 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:51.895794 systemd-logind[1890]: Session 22 logged out. Waiting for processes to exit. Mar 12 02:58:51.896428 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:45522.service: Deactivated successfully. Mar 12 02:58:51.898326 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 02:58:51.899623 systemd-logind[1890]: Removed session 22. Mar 12 02:58:51.981431 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:45528.service - OpenSSH per-connection server daemon (10.200.16.10:45528). Mar 12 02:58:52.396502 sshd[4970]: Accepted publickey for core from 10.200.16.10 port 45528 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:52.397279 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:52.400495 systemd-logind[1890]: New session 23 of user core. Mar 12 02:58:52.412936 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 12 02:58:53.835782 containerd[1908]: time="2026-03-12T02:58:53.835743013Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 02:58:53.840331 containerd[1908]: time="2026-03-12T02:58:53.840232376Z" level=info msg="StopContainer for \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" with timeout 30 (s)" Mar 12 02:58:53.841300 containerd[1908]: time="2026-03-12T02:58:53.840982941Z" level=info msg="Stop container \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" with signal terminated" Mar 12 02:58:53.843518 containerd[1908]: time="2026-03-12T02:58:53.843413737Z" level=info msg="StopContainer for \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" with timeout 2 (s)" Mar 12 02:58:53.844649 containerd[1908]: time="2026-03-12T02:58:53.844563229Z" level=info msg="Stop container \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" with signal terminated" Mar 12 02:58:53.854436 systemd-networkd[1498]: lxc_health: Link DOWN Mar 12 02:58:53.854682 systemd-networkd[1498]: lxc_health: Lost carrier Mar 12 02:58:53.861427 systemd[1]: cri-containerd-4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8.scope: Deactivated successfully. Mar 12 02:58:53.865393 containerd[1908]: time="2026-03-12T02:58:53.863794824Z" level=info msg="received container exit event container_id:\"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" id:\"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" pid:3963 exited_at:{seconds:1773284333 nanos:863602657}" Mar 12 02:58:53.877472 systemd[1]: cri-containerd-869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab.scope: Deactivated successfully. 
Mar 12 02:58:53.877894 systemd[1]: cri-containerd-869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab.scope: Consumed 4.281s CPU time, 125.1M memory peak, 112K read from disk, 12.9M written to disk. Mar 12 02:58:53.879469 containerd[1908]: time="2026-03-12T02:58:53.879375440Z" level=info msg="received container exit event container_id:\"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" id:\"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" pid:4076 exited_at:{seconds:1773284333 nanos:878230173}" Mar 12 02:58:53.884196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8-rootfs.mount: Deactivated successfully. Mar 12 02:58:53.894832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab-rootfs.mount: Deactivated successfully. Mar 12 02:58:53.983967 containerd[1908]: time="2026-03-12T02:58:53.983935239Z" level=info msg="StopContainer for \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" returns successfully" Mar 12 02:58:53.984509 containerd[1908]: time="2026-03-12T02:58:53.984489804Z" level=info msg="StopPodSandbox for \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\"" Mar 12 02:58:53.984727 containerd[1908]: time="2026-03-12T02:58:53.984626841Z" level=info msg="Container to stop \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 02:58:53.984727 containerd[1908]: time="2026-03-12T02:58:53.984658370Z" level=info msg="Container to stop \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 02:58:53.984727 containerd[1908]: time="2026-03-12T02:58:53.984666346Z" level=info msg="Container to stop 
\"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 02:58:53.984727 containerd[1908]: time="2026-03-12T02:58:53.984673899Z" level=info msg="Container to stop \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 02:58:53.984727 containerd[1908]: time="2026-03-12T02:58:53.984679619Z" level=info msg="Container to stop \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 02:58:53.987358 containerd[1908]: time="2026-03-12T02:58:53.987296294Z" level=info msg="StopContainer for \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" returns successfully" Mar 12 02:58:53.988033 containerd[1908]: time="2026-03-12T02:58:53.987994113Z" level=info msg="StopPodSandbox for \"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\"" Mar 12 02:58:53.988244 containerd[1908]: time="2026-03-12T02:58:53.988111605Z" level=info msg="Container to stop \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 02:58:53.990093 systemd[1]: cri-containerd-a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947.scope: Deactivated successfully. Mar 12 02:58:53.991655 containerd[1908]: time="2026-03-12T02:58:53.991587602Z" level=info msg="received sandbox exit event container_id:\"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" id:\"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" exit_status:137 exited_at:{seconds:1773284333 nanos:991200995}" monitor_name=podsandbox Mar 12 02:58:53.995649 systemd[1]: cri-containerd-ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308.scope: Deactivated successfully. 
Mar 12 02:58:54.002180 containerd[1908]: time="2026-03-12T02:58:54.001988685Z" level=info msg="received sandbox exit event container_id:\"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\" id:\"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\" exit_status:137 exited_at:{seconds:1773284334 nanos:1640112}" monitor_name=podsandbox Mar 12 02:58:54.014451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947-rootfs.mount: Deactivated successfully. Mar 12 02:58:54.021043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308-rootfs.mount: Deactivated successfully. Mar 12 02:58:54.024975 containerd[1908]: time="2026-03-12T02:58:54.024831962Z" level=info msg="shim disconnected" id=a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947 namespace=k8s.io Mar 12 02:58:54.024975 containerd[1908]: time="2026-03-12T02:58:54.024855427Z" level=warning msg="cleaning up after shim disconnected" id=a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947 namespace=k8s.io Mar 12 02:58:54.024975 containerd[1908]: time="2026-03-12T02:58:54.024874243Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 02:58:54.025548 containerd[1908]: time="2026-03-12T02:58:54.025529948Z" level=info msg="shim disconnected" id=ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308 namespace=k8s.io Mar 12 02:58:54.026329 containerd[1908]: time="2026-03-12T02:58:54.026211174Z" level=warning msg="cleaning up after shim disconnected" id=ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308 namespace=k8s.io Mar 12 02:58:54.026329 containerd[1908]: time="2026-03-12T02:58:54.026241679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 02:58:54.034360 containerd[1908]: time="2026-03-12T02:58:54.034332467Z" level=info msg="received sandbox container exit event 
sandbox_id:\"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" exit_status:137 exited_at:{seconds:1773284333 nanos:991200995}" monitor_name=criService Mar 12 02:58:54.035827 containerd[1908]: time="2026-03-12T02:58:54.035784322Z" level=info msg="received sandbox container exit event sandbox_id:\"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\" exit_status:137 exited_at:{seconds:1773284334 nanos:1640112}" monitor_name=criService Mar 12 02:58:54.036225 containerd[1908]: time="2026-03-12T02:58:54.036174681Z" level=info msg="TearDown network for sandbox \"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\" successfully" Mar 12 02:58:54.036225 containerd[1908]: time="2026-03-12T02:58:54.036193530Z" level=info msg="StopPodSandbox for \"ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308\" returns successfully" Mar 12 02:58:54.036471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947-shm.mount: Deactivated successfully. 
Mar 12 02:58:54.036665 containerd[1908]: time="2026-03-12T02:58:54.036599673Z" level=info msg="TearDown network for sandbox \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" successfully" Mar 12 02:58:54.036665 containerd[1908]: time="2026-03-12T02:58:54.036641603Z" level=info msg="StopPodSandbox for \"a635061c22127b7a16d505602f0ddb8a7a83edd16c7e1901112d1502fdc34947\" returns successfully" Mar 12 02:58:54.078612 kubelet[3437]: I0312 02:58:54.078518 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/46c28e00-9663-4822-8099-c81f5d7ff3ae-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46c28e00-9663-4822-8099-c81f5d7ff3ae-cilium-config-path\") pod \"46c28e00-9663-4822-8099-c81f5d7ff3ae\" (UID: \"46c28e00-9663-4822-8099-c81f5d7ff3ae\") " Mar 12 02:58:54.080112 kubelet[3437]: I0312 02:58:54.080089 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c28e00-9663-4822-8099-c81f5d7ff3ae-cilium-config-path" pod "46c28e00-9663-4822-8099-c81f5d7ff3ae" (UID: "46c28e00-9663-4822-8099-c81f5d7ff3ae"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 02:58:54.169921 kubelet[3437]: I0312 02:58:54.169344 3437 scope.go:122] "RemoveContainer" containerID="4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8" Mar 12 02:58:54.172658 containerd[1908]: time="2026-03-12T02:58:54.172488055Z" level=info msg="RemoveContainer for \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\"" Mar 12 02:58:54.179538 kubelet[3437]: I0312 02:58:54.179515 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-run\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-run\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.182895 kubelet[3437]: I0312 02:58:54.179609 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-run" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.182895 kubelet[3437]: I0312 02:58:54.179629 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-net\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.182895 kubelet[3437]: I0312 02:58:54.179647 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-kernel\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.182895 kubelet[3437]: I0312 02:58:54.179642 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-net" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.182895 kubelet[3437]: I0312 02:58:54.179684 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cni-path\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cni-path\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183009 kubelet[3437]: I0312 02:58:54.179740 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cni-path" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.183009 kubelet[3437]: I0312 02:58:54.179757 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-kernel" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.183009 kubelet[3437]: I0312 02:58:54.179835 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-cgroup" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.183009 kubelet[3437]: I0312 02:58:54.179851 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-cgroup\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183009 kubelet[3437]: I0312 02:58:54.179885 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/9575640c-f5fc-4eca-9b78-a781b5903216-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9575640c-f5fc-4eca-9b78-a781b5903216-clustermesh-secrets\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183085 kubelet[3437]: I0312 02:58:54.179904 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/46c28e00-9663-4822-8099-c81f5d7ff3ae-kube-api-access-4hpzd\" (UniqueName: \"kubernetes.io/projected/46c28e00-9663-4822-8099-c81f5d7ff3ae-kube-api-access-4hpzd\") pod \"46c28e00-9663-4822-8099-c81f5d7ff3ae\" (UID: \"46c28e00-9663-4822-8099-c81f5d7ff3ae\") " Mar 12 02:58:54.183085 kubelet[3437]: I0312 02:58:54.179996 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-hostproc\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-hostproc\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183085 kubelet[3437]: I0312 02:58:54.180014 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-bpf-maps\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183085 kubelet[3437]: I0312 02:58:54.180028 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-kube-api-access-trnrw\" (UniqueName: \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-kube-api-access-trnrw\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183085 kubelet[3437]: I0312 02:58:54.180038 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-xtables-lock\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183160 kubelet[3437]: I0312 02:58:54.180272 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-bpf-maps" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.183160 kubelet[3437]: I0312 02:58:54.180048 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-hubble-tls\" (UniqueName: \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-hubble-tls\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183160 kubelet[3437]: I0312 02:58:54.180452 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-hostproc" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.183160 kubelet[3437]: I0312 02:58:54.180455 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-etc-cni-netd\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183160 kubelet[3437]: I0312 02:58:54.180482 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-config-path\") pod \"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183233 kubelet[3437]: I0312 02:58:54.180496 3437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-lib-modules\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-lib-modules\") pod 
\"9575640c-f5fc-4eca-9b78-a781b5903216\" (UID: \"9575640c-f5fc-4eca-9b78-a781b5903216\") " Mar 12 02:58:54.183233 kubelet[3437]: I0312 02:58:54.180524 3437 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-run\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183233 kubelet[3437]: I0312 02:58:54.180531 3437 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-net\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183233 kubelet[3437]: I0312 02:58:54.180538 3437 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-host-proc-sys-kernel\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183233 kubelet[3437]: I0312 02:58:54.180544 3437 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cni-path\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183233 kubelet[3437]: I0312 02:58:54.180549 3437 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-cgroup\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183233 kubelet[3437]: I0312 02:58:54.180557 3437 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46c28e00-9663-4822-8099-c81f5d7ff3ae-cilium-config-path\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183332 kubelet[3437]: I0312 02:58:54.180563 3437 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-hostproc\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183332 kubelet[3437]: I0312 02:58:54.180568 3437 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-bpf-maps\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.183332 kubelet[3437]: I0312 02:58:54.180585 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-lib-modules" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.183332 kubelet[3437]: I0312 02:58:54.180768 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-xtables-lock" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.183332 kubelet[3437]: I0312 02:58:54.180930 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-etc-cni-netd" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 02:58:54.186584 kubelet[3437]: I0312 02:58:54.186548 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-config-path" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 02:58:54.187726 kubelet[3437]: I0312 02:58:54.187194 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c28e00-9663-4822-8099-c81f5d7ff3ae-kube-api-access-4hpzd" pod "46c28e00-9663-4822-8099-c81f5d7ff3ae" (UID: "46c28e00-9663-4822-8099-c81f5d7ff3ae"). InnerVolumeSpecName "kube-api-access-4hpzd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 02:58:54.187726 kubelet[3437]: I0312 02:58:54.187280 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-hubble-tls" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 02:58:54.189055 kubelet[3437]: I0312 02:58:54.189032 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-kube-api-access-trnrw" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "kube-api-access-trnrw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 02:58:54.189217 kubelet[3437]: I0312 02:58:54.189182 3437 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9575640c-f5fc-4eca-9b78-a781b5903216-clustermesh-secrets" pod "9575640c-f5fc-4eca-9b78-a781b5903216" (UID: "9575640c-f5fc-4eca-9b78-a781b5903216"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 02:58:54.191370 containerd[1908]: time="2026-03-12T02:58:54.191343876Z" level=info msg="RemoveContainer for \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" returns successfully" Mar 12 02:58:54.191534 kubelet[3437]: I0312 02:58:54.191515 3437 scope.go:122] "RemoveContainer" containerID="4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8" Mar 12 02:58:54.191750 containerd[1908]: time="2026-03-12T02:58:54.191689897Z" level=error msg="ContainerStatus for \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\": not found" Mar 12 02:58:54.191882 kubelet[3437]: E0312 02:58:54.191791 3437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\": not found" containerID="4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8" Mar 12 02:58:54.191882 kubelet[3437]: I0312 02:58:54.191835 3437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8"} err="failed to get container status \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"4593cc87149965b8a81d2c270452793257ee3a5e368643185ecb8625323da4e8\": not found" Mar 12 02:58:54.191882 kubelet[3437]: I0312 02:58:54.191860 3437 scope.go:122] "RemoveContainer" containerID="869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab" Mar 12 02:58:54.193160 containerd[1908]: time="2026-03-12T02:58:54.193066918Z" level=info msg="RemoveContainer for 
\"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\"" Mar 12 02:58:54.201518 containerd[1908]: time="2026-03-12T02:58:54.201493782Z" level=info msg="RemoveContainer for \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" returns successfully" Mar 12 02:58:54.201679 kubelet[3437]: I0312 02:58:54.201659 3437 scope.go:122] "RemoveContainer" containerID="c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d" Mar 12 02:58:54.203183 containerd[1908]: time="2026-03-12T02:58:54.202877739Z" level=info msg="RemoveContainer for \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\"" Mar 12 02:58:54.211386 containerd[1908]: time="2026-03-12T02:58:54.211364725Z" level=info msg="RemoveContainer for \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\" returns successfully" Mar 12 02:58:54.211693 kubelet[3437]: I0312 02:58:54.211666 3437 scope.go:122] "RemoveContainer" containerID="cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996" Mar 12 02:58:54.213364 containerd[1908]: time="2026-03-12T02:58:54.213321864Z" level=info msg="RemoveContainer for \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\"" Mar 12 02:58:54.220698 containerd[1908]: time="2026-03-12T02:58:54.220637542Z" level=info msg="RemoveContainer for \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\" returns successfully" Mar 12 02:58:54.220929 kubelet[3437]: I0312 02:58:54.220886 3437 scope.go:122] "RemoveContainer" containerID="942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70" Mar 12 02:58:54.222232 containerd[1908]: time="2026-03-12T02:58:54.222207962Z" level=info msg="RemoveContainer for \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\"" Mar 12 02:58:54.230918 containerd[1908]: time="2026-03-12T02:58:54.230894580Z" level=info msg="RemoveContainer for \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\" returns successfully" Mar 12 02:58:54.231130 
kubelet[3437]: I0312 02:58:54.231057 3437 scope.go:122] "RemoveContainer" containerID="0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607" Mar 12 02:58:54.232228 containerd[1908]: time="2026-03-12T02:58:54.232208782Z" level=info msg="RemoveContainer for \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\"" Mar 12 02:58:54.241143 containerd[1908]: time="2026-03-12T02:58:54.241041742Z" level=info msg="RemoveContainer for \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\" returns successfully" Mar 12 02:58:54.241415 containerd[1908]: time="2026-03-12T02:58:54.241338033Z" level=error msg="ContainerStatus for \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\": not found" Mar 12 02:58:54.241455 kubelet[3437]: I0312 02:58:54.241178 3437 scope.go:122] "RemoveContainer" containerID="869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab" Mar 12 02:58:54.241455 kubelet[3437]: E0312 02:58:54.241435 3437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\": not found" containerID="869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab" Mar 12 02:58:54.241498 kubelet[3437]: I0312 02:58:54.241454 3437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab"} err="failed to get container status \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"869dd5c8562481f3636f7adc47203e5b516056072231cf8fc4c22083d5dc32ab\": not found" Mar 12 02:58:54.241498 kubelet[3437]: I0312 
02:58:54.241479 3437 scope.go:122] "RemoveContainer" containerID="c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d" Mar 12 02:58:54.241698 containerd[1908]: time="2026-03-12T02:58:54.241674550Z" level=error msg="ContainerStatus for \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\": not found" Mar 12 02:58:54.241864 kubelet[3437]: E0312 02:58:54.241843 3437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\": not found" containerID="c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d" Mar 12 02:58:54.241924 kubelet[3437]: I0312 02:58:54.241863 3437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d"} err="failed to get container status \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c398764667396af820ed3939c7d1768f23765c996c64aff0c903d9255658032d\": not found" Mar 12 02:58:54.241924 kubelet[3437]: I0312 02:58:54.241874 3437 scope.go:122] "RemoveContainer" containerID="cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996" Mar 12 02:58:54.242100 containerd[1908]: time="2026-03-12T02:58:54.241998394Z" level=error msg="ContainerStatus for \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\": not found" Mar 12 02:58:54.242142 kubelet[3437]: E0312 02:58:54.242074 3437 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\": not found" containerID="cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996" Mar 12 02:58:54.242278 kubelet[3437]: I0312 02:58:54.242205 3437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996"} err="failed to get container status \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc3bfdf97ebaa9bdfc71378f2def5c6a82c05300316d7e22961eed0ebb925996\": not found" Mar 12 02:58:54.242278 kubelet[3437]: I0312 02:58:54.242228 3437 scope.go:122] "RemoveContainer" containerID="942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70" Mar 12 02:58:54.242500 containerd[1908]: time="2026-03-12T02:58:54.242421994Z" level=error msg="ContainerStatus for \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\": not found" Mar 12 02:58:54.242625 kubelet[3437]: E0312 02:58:54.242570 3437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\": not found" containerID="942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70" Mar 12 02:58:54.242625 kubelet[3437]: I0312 02:58:54.242594 3437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70"} err="failed to get container status \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"942eb4b017db4514e6adaf90300d8f6e6053d33331d5b6c400b2624e48d3cd70\": not found" Mar 12 02:58:54.242625 kubelet[3437]: I0312 02:58:54.242605 3437 scope.go:122] "RemoveContainer" containerID="0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607" Mar 12 02:58:54.243028 containerd[1908]: time="2026-03-12T02:58:54.242888444Z" level=error msg="ContainerStatus for \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\": not found" Mar 12 02:58:54.243092 kubelet[3437]: E0312 02:58:54.243065 3437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\": not found" containerID="0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607" Mar 12 02:58:54.243092 kubelet[3437]: I0312 02:58:54.243082 3437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607"} err="failed to get container status \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\": rpc error: code = NotFound desc = an error occurred when try to find container \"0499becf3de0364c9115dea8a9f12bcb06041ac7b9ac488d3fd25ad5a2b7a607\": not found" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281405 3437 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trnrw\" (UniqueName: \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-kube-api-access-trnrw\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281437 3437 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-xtables-lock\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281445 3437 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9575640c-f5fc-4eca-9b78-a781b5903216-hubble-tls\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281452 3437 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-etc-cni-netd\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281458 3437 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9575640c-f5fc-4eca-9b78-a781b5903216-cilium-config-path\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281463 3437 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9575640c-f5fc-4eca-9b78-a781b5903216-lib-modules\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281469 3437 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9575640c-f5fc-4eca-9b78-a781b5903216-clustermesh-secrets\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.281491 kubelet[3437]: I0312 02:58:54.281475 3437 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hpzd\" (UniqueName: \"kubernetes.io/projected/46c28e00-9663-4822-8099-c81f5d7ff3ae-kube-api-access-4hpzd\") on node \"ci-4459.2.4-n-4fd21a1aad\" DevicePath \"\"" Mar 12 02:58:54.473462 systemd[1]: Removed slice kubepods-besteffort-pod46c28e00_9663_4822_8099_c81f5d7ff3ae.slice - 
libcontainer container kubepods-besteffort-pod46c28e00_9663_4822_8099_c81f5d7ff3ae.slice. Mar 12 02:58:54.479420 systemd[1]: Removed slice kubepods-burstable-pod9575640c_f5fc_4eca_9b78_a781b5903216.slice - libcontainer container kubepods-burstable-pod9575640c_f5fc_4eca_9b78_a781b5903216.slice. Mar 12 02:58:54.479515 systemd[1]: kubepods-burstable-pod9575640c_f5fc_4eca_9b78_a781b5903216.slice: Consumed 4.344s CPU time, 125.5M memory peak, 112K read from disk, 12.9M written to disk. Mar 12 02:58:54.883936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae95e49fc60b5b0a1e9200afb49e6772e306ba597db719443a10675d337fd308-shm.mount: Deactivated successfully. Mar 12 02:58:54.884031 systemd[1]: var-lib-kubelet-pods-46c28e00\x2d9663\x2d4822\x2d8099\x2dc81f5d7ff3ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4hpzd.mount: Deactivated successfully. Mar 12 02:58:54.884072 systemd[1]: var-lib-kubelet-pods-9575640c\x2df5fc\x2d4eca\x2d9b78\x2da781b5903216-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtrnrw.mount: Deactivated successfully. Mar 12 02:58:54.884112 systemd[1]: var-lib-kubelet-pods-9575640c\x2df5fc\x2d4eca\x2d9b78\x2da781b5903216-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 12 02:58:54.884149 systemd[1]: var-lib-kubelet-pods-9575640c\x2df5fc\x2d4eca\x2d9b78\x2da781b5903216-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 12 02:58:55.856851 sshd[4973]: Connection closed by 10.200.16.10 port 45528 Mar 12 02:58:55.857291 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:55.860119 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:45528.service: Deactivated successfully. Mar 12 02:58:55.860452 systemd-logind[1890]: Session 23 logged out. Waiting for processes to exit. Mar 12 02:58:55.861971 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 02:58:55.864460 systemd-logind[1890]: Removed session 23. 
Mar 12 02:58:55.885672 kubelet[3437]: I0312 02:58:55.885588 3437 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="46c28e00-9663-4822-8099-c81f5d7ff3ae" path="/var/lib/kubelet/pods/46c28e00-9663-4822-8099-c81f5d7ff3ae/volumes" Mar 12 02:58:55.886207 kubelet[3437]: I0312 02:58:55.885899 3437 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9575640c-f5fc-4eca-9b78-a781b5903216" path="/var/lib/kubelet/pods/9575640c-f5fc-4eca-9b78-a781b5903216/volumes" Mar 12 02:58:55.946540 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:45540.service - OpenSSH per-connection server daemon (10.200.16.10:45540). Mar 12 02:58:56.361845 sshd[5118]: Accepted publickey for core from 10.200.16.10 port 45540 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE Mar 12 02:58:56.362827 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 02:58:56.366422 systemd-logind[1890]: New session 24 of user core. Mar 12 02:58:56.370918 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 12 02:58:56.954412 kubelet[3437]: E0312 02:58:56.954375 3437 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 12 02:58:57.155311 systemd[1]: Created slice kubepods-burstable-podb980b710_166e_4910_8897_4ecb9470dc54.slice - libcontainer container kubepods-burstable-podb980b710_166e_4910_8897_4ecb9470dc54.slice. Mar 12 02:58:57.178699 sshd[5121]: Connection closed by 10.200.16.10 port 45540 Mar 12 02:58:57.179305 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Mar 12 02:58:57.183952 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:45540.service: Deactivated successfully. Mar 12 02:58:57.185552 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 02:58:57.186567 systemd-logind[1890]: Session 24 logged out. 
Waiting for processes to exit. Mar 12 02:58:57.188470 systemd-logind[1890]: Removed session 24. Mar 12 02:58:57.195228 kubelet[3437]: I0312 02:58:57.194955 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-cni-path\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195228 kubelet[3437]: I0312 02:58:57.194984 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-etc-cni-netd\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195228 kubelet[3437]: I0312 02:58:57.194996 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-lib-modules\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195228 kubelet[3437]: I0312 02:58:57.195005 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-cilium-run\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195228 kubelet[3437]: I0312 02:58:57.195029 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-hostproc\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195228 kubelet[3437]: I0312 02:58:57.195040 3437 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-cilium-cgroup\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195364 kubelet[3437]: I0312 02:58:57.195051 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-host-proc-sys-net\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195364 kubelet[3437]: I0312 02:58:57.195061 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-host-proc-sys-kernel\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195364 kubelet[3437]: I0312 02:58:57.195070 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b980b710-166e-4910-8897-4ecb9470dc54-clustermesh-secrets\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195364 kubelet[3437]: I0312 02:58:57.195080 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b980b710-166e-4910-8897-4ecb9470dc54-hubble-tls\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195364 kubelet[3437]: I0312 02:58:57.195088 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2zcgj\" (UniqueName: \"kubernetes.io/projected/b980b710-166e-4910-8897-4ecb9470dc54-kube-api-access-2zcgj\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195438 kubelet[3437]: I0312 02:58:57.195100 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-xtables-lock\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195438 kubelet[3437]: I0312 02:58:57.195110 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b980b710-166e-4910-8897-4ecb9470dc54-bpf-maps\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195438 kubelet[3437]: I0312 02:58:57.195119 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b980b710-166e-4910-8897-4ecb9470dc54-cilium-config-path\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.195438 kubelet[3437]: I0312 02:58:57.195128 3437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b980b710-166e-4910-8897-4ecb9470dc54-cilium-ipsec-secrets\") pod \"cilium-6xmws\" (UID: \"b980b710-166e-4910-8897-4ecb9470dc54\") " pod="kube-system/cilium-6xmws" Mar 12 02:58:57.267160 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:45552.service - OpenSSH per-connection server daemon (10.200.16.10:45552). 
Mar 12 02:58:57.464437 containerd[1908]: time="2026-03-12T02:58:57.464397118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xmws,Uid:b980b710-166e-4910-8897-4ecb9470dc54,Namespace:kube-system,Attempt:0,}" Mar 12 02:58:57.508656 containerd[1908]: time="2026-03-12T02:58:57.508616951Z" level=info msg="connecting to shim d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67" address="unix:///run/containerd/s/aae01e45eff1709f92a9ca6efc37f7fe17f3130a5fd4eceb14282050448a0792" namespace=k8s.io protocol=ttrpc version=3 Mar 12 02:58:57.522944 systemd[1]: Started cri-containerd-d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67.scope - libcontainer container d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67. Mar 12 02:58:57.542529 containerd[1908]: time="2026-03-12T02:58:57.542500287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xmws,Uid:b980b710-166e-4910-8897-4ecb9470dc54,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\"" Mar 12 02:58:57.551132 containerd[1908]: time="2026-03-12T02:58:57.551071645Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 12 02:58:57.568420 containerd[1908]: time="2026-03-12T02:58:57.568390735Z" level=info msg="Container dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588: CDI devices from CRI Config.CDIDevices: []" Mar 12 02:58:57.583502 containerd[1908]: time="2026-03-12T02:58:57.583466148Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588\"" Mar 12 02:58:57.583926 containerd[1908]: time="2026-03-12T02:58:57.583904605Z" level=info 
msg="StartContainer for \"dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588\""
Mar 12 02:58:57.585104 containerd[1908]: time="2026-03-12T02:58:57.585082242Z" level=info msg="connecting to shim dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588" address="unix:///run/containerd/s/aae01e45eff1709f92a9ca6efc37f7fe17f3130a5fd4eceb14282050448a0792" protocol=ttrpc version=3
Mar 12 02:58:57.597917 systemd[1]: Started cri-containerd-dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588.scope - libcontainer container dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588.
Mar 12 02:58:57.621924 containerd[1908]: time="2026-03-12T02:58:57.621839631Z" level=info msg="StartContainer for \"dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588\" returns successfully"
Mar 12 02:58:57.624975 systemd[1]: cri-containerd-dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588.scope: Deactivated successfully.
Mar 12 02:58:57.627696 containerd[1908]: time="2026-03-12T02:58:57.627675525Z" level=info msg="received container exit event container_id:\"dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588\" id:\"dc4a7cdfd1d30118e1701fe1c997b2efa2ae6ad5494503fd8b22d226d6f39588\" pid:5197 exited_at:{seconds:1773284337 nanos:627281566}"
Mar 12 02:58:57.684845 sshd[5131]: Accepted publickey for core from 10.200.16.10 port 45552 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:57.686178 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:57.689518 systemd-logind[1890]: New session 25 of user core.
Mar 12 02:58:57.693903 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 12 02:58:57.916424 sshd[5229]: Connection closed by 10.200.16.10 port 45552
Mar 12 02:58:57.916958 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
Mar 12 02:58:57.919780 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:45552.service: Deactivated successfully.
Mar 12 02:58:57.921265 systemd[1]: session-25.scope: Deactivated successfully.
Mar 12 02:58:57.922005 systemd-logind[1890]: Session 25 logged out. Waiting for processes to exit.
Mar 12 02:58:57.923298 systemd-logind[1890]: Removed session 25.
Mar 12 02:58:58.018146 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:45568.service - OpenSSH per-connection server daemon (10.200.16.10:45568).
Mar 12 02:58:58.193477 containerd[1908]: time="2026-03-12T02:58:58.193251059Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 02:58:58.209467 containerd[1908]: time="2026-03-12T02:58:58.209441891Z" level=info msg="Container 37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:58:58.232207 containerd[1908]: time="2026-03-12T02:58:58.232177723Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8\""
Mar 12 02:58:58.232639 containerd[1908]: time="2026-03-12T02:58:58.232603675Z" level=info msg="StartContainer for \"37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8\""
Mar 12 02:58:58.234012 containerd[1908]: time="2026-03-12T02:58:58.233989968Z" level=info msg="connecting to shim 37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8" address="unix:///run/containerd/s/aae01e45eff1709f92a9ca6efc37f7fe17f3130a5fd4eceb14282050448a0792" protocol=ttrpc version=3
Mar 12 02:58:58.254947 systemd[1]: Started cri-containerd-37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8.scope - libcontainer container 37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8.
Mar 12 02:58:58.281162 containerd[1908]: time="2026-03-12T02:58:58.281135248Z" level=info msg="StartContainer for \"37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8\" returns successfully"
Mar 12 02:58:58.284276 systemd[1]: cri-containerd-37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8.scope: Deactivated successfully.
Mar 12 02:58:58.285414 containerd[1908]: time="2026-03-12T02:58:58.285339080Z" level=info msg="received container exit event container_id:\"37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8\" id:\"37a7ecd8f40a0752954b9978a47b4e210c631932c66a59a3636987c55850b0c8\" pid:5251 exited_at:{seconds:1773284338 nanos:285192618}"
Mar 12 02:58:58.433034 sshd[5236]: Accepted publickey for core from 10.200.16.10 port 45568 ssh2: RSA SHA256:Z7iH1P3S73ZdxQIwiDYFg2VFhFwvaatKOiDPh/QZsqE
Mar 12 02:58:58.434047 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 02:58:58.437487 systemd-logind[1890]: New session 26 of user core.
Mar 12 02:58:58.442918 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 12 02:58:59.198136 containerd[1908]: time="2026-03-12T02:58:59.198100508Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 02:58:59.219913 containerd[1908]: time="2026-03-12T02:58:59.218486652Z" level=info msg="Container aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:58:59.219294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115059553.mount: Deactivated successfully.
Mar 12 02:58:59.235384 containerd[1908]: time="2026-03-12T02:58:59.235300931Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61\""
Mar 12 02:58:59.235842 containerd[1908]: time="2026-03-12T02:58:59.235818999Z" level=info msg="StartContainer for \"aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61\""
Mar 12 02:58:59.237907 containerd[1908]: time="2026-03-12T02:58:59.237568177Z" level=info msg="connecting to shim aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61" address="unix:///run/containerd/s/aae01e45eff1709f92a9ca6efc37f7fe17f3130a5fd4eceb14282050448a0792" protocol=ttrpc version=3
Mar 12 02:58:59.259920 systemd[1]: Started cri-containerd-aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61.scope - libcontainer container aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61.
Mar 12 02:58:59.313636 systemd[1]: cri-containerd-aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61.scope: Deactivated successfully.
Mar 12 02:58:59.319006 containerd[1908]: time="2026-03-12T02:58:59.318975616Z" level=info msg="received container exit event container_id:\"aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61\" id:\"aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61\" pid:5298 exited_at:{seconds:1773284339 nanos:315198592}"
Mar 12 02:58:59.324897 containerd[1908]: time="2026-03-12T02:58:59.324848151Z" level=info msg="StartContainer for \"aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61\" returns successfully"
Mar 12 02:58:59.334459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa15d36b66d700d155ae9f6a33eac264fd1c78a0c5988bfb235443d1d5eaac61-rootfs.mount: Deactivated successfully.
Mar 12 02:59:00.201974 containerd[1908]: time="2026-03-12T02:59:00.201854157Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 02:59:00.223862 containerd[1908]: time="2026-03-12T02:59:00.223453634Z" level=info msg="Container cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:59:00.237495 containerd[1908]: time="2026-03-12T02:59:00.237468071Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5\""
Mar 12 02:59:00.238372 containerd[1908]: time="2026-03-12T02:59:00.237831693Z" level=info msg="StartContainer for \"cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5\""
Mar 12 02:59:00.238494 containerd[1908]: time="2026-03-12T02:59:00.238475349Z" level=info msg="connecting to shim cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5" address="unix:///run/containerd/s/aae01e45eff1709f92a9ca6efc37f7fe17f3130a5fd4eceb14282050448a0792" protocol=ttrpc version=3
Mar 12 02:59:00.256916 systemd[1]: Started cri-containerd-cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5.scope - libcontainer container cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5.
Mar 12 02:59:00.274862 systemd[1]: cri-containerd-cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5.scope: Deactivated successfully.
Mar 12 02:59:00.280177 containerd[1908]: time="2026-03-12T02:59:00.279545095Z" level=info msg="received container exit event container_id:\"cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5\" id:\"cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5\" pid:5339 exited_at:{seconds:1773284340 nanos:275449907}"
Mar 12 02:59:00.284437 containerd[1908]: time="2026-03-12T02:59:00.284415376Z" level=info msg="StartContainer for \"cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5\" returns successfully"
Mar 12 02:59:00.292545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb7cbde6d11f9cd86e0a550d7e4f18cd37fc638d4bff39fd1170ef1bdf2674e5-rootfs.mount: Deactivated successfully.
Mar 12 02:59:01.207863 containerd[1908]: time="2026-03-12T02:59:01.207825436Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 02:59:01.227005 containerd[1908]: time="2026-03-12T02:59:01.226975100Z" level=info msg="Container eb52a0a49b1f6d165ec947fd5bc5a4c92088fb066e0276ead411b1735a912298: CDI devices from CRI Config.CDIDevices: []"
Mar 12 02:59:01.248071 containerd[1908]: time="2026-03-12T02:59:01.247990411Z" level=info msg="CreateContainer within sandbox \"d0b68f48e3fce634dcfe686cb2ede167351f3875f06cab8105f75513a8076e67\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eb52a0a49b1f6d165ec947fd5bc5a4c92088fb066e0276ead411b1735a912298\""
Mar 12 02:59:01.248675 containerd[1908]: time="2026-03-12T02:59:01.248648212Z" level=info msg="StartContainer for \"eb52a0a49b1f6d165ec947fd5bc5a4c92088fb066e0276ead411b1735a912298\""
Mar 12 02:59:01.249690 containerd[1908]: time="2026-03-12T02:59:01.249626209Z" level=info msg="connecting to shim eb52a0a49b1f6d165ec947fd5bc5a4c92088fb066e0276ead411b1735a912298" address="unix:///run/containerd/s/aae01e45eff1709f92a9ca6efc37f7fe17f3130a5fd4eceb14282050448a0792" protocol=ttrpc version=3
Mar 12 02:59:01.266936 systemd[1]: Started cri-containerd-eb52a0a49b1f6d165ec947fd5bc5a4c92088fb066e0276ead411b1735a912298.scope - libcontainer container eb52a0a49b1f6d165ec947fd5bc5a4c92088fb066e0276ead411b1735a912298.
Mar 12 02:59:01.297954 containerd[1908]: time="2026-03-12T02:59:01.297917821Z" level=info msg="StartContainer for \"eb52a0a49b1f6d165ec947fd5bc5a4c92088fb066e0276ead411b1735a912298\" returns successfully"
Mar 12 02:59:01.536076 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 12 02:59:02.217637 kubelet[3437]: I0312 02:59:02.217588 3437 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-6xmws" podStartSLOduration=5.217575178 podStartE2EDuration="5.217575178s" podCreationTimestamp="2026-03-12 02:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 02:59:02.217249238 +0000 UTC m=+140.399065466" watchObservedRunningTime="2026-03-12 02:59:02.217575178 +0000 UTC m=+140.399391406"
Mar 12 02:59:03.880571 systemd-networkd[1498]: lxc_health: Link UP
Mar 12 02:59:03.895566 systemd-networkd[1498]: lxc_health: Gained carrier
Mar 12 02:59:05.173958 systemd-networkd[1498]: lxc_health: Gained IPv6LL
Mar 12 02:59:09.100333 sshd[5279]: Connection closed by 10.200.16.10 port 45568
Mar 12 02:59:09.100957 sshd-session[5236]: pam_unix(sshd:session): session closed for user core
Mar 12 02:59:09.103858 systemd-logind[1890]: Session 26 logged out. Waiting for processes to exit.
Mar 12 02:59:09.103967 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:45568.service: Deactivated successfully.
Mar 12 02:59:09.105408 systemd[1]: session-26.scope: Deactivated successfully.
Mar 12 02:59:09.108045 systemd-logind[1890]: Removed session 26.